A novel view on bounding execution demand under mixed-criticality EDF

In this paper, we are concerned with scheduling a mix of high-criticality (HI) and low-criticality (LO) tasks under Earliest Deadline First (EDF) on one processor. To this end, the system implements two operation modes, LO and HI mode. In LO mode, HI tasks execute for no longer than their optimistic execution budgets and are scheduled together with the LO tasks. The system switches to HI mode, where all LO tasks are prevented from running, when one or more HI tasks run for longer than expected. Since these mode changes may happen at arbitrary points in time, it is difficult to find an accurate bound on carry-over jobs, i.e., those HI jobs that were released before, but did not finish executing at, the point of the transition. To overcome this problem, we propose a technique that works around the computation of carry-over execution demand. Basically, the proposed technique separates the schedulability analysis of the transition between LO and HI mode from that of stable HI mode. We prove that a transition from LO to HI mode is feasible, if an equivalent task set derived from the original one is schedulable under plain EDF. On this basis, we can apply approximation techniques such as the well-known Devi's test to derive further tests that trade off accuracy versus complexity/runtime. Finally, we perform a detailed comparison with respect to weighted schedulability on synthetic data, illustrating the benefits of the proposed technique.


Introduction
There is an increasingly important trend in domains such as automotive systems, avionics, and medical engineering towards integrating functions with different levels of criticality onto a common hardware platform. This allows for a reduction of costs and complexity; however, it leads to mixed-criticality (MC) systems that require careful design and analysis. In particular, it must be guaranteed that high-criticality (HI) functions/tasks are not affected by low-criticality (LO) tasks that share the same resources.
In this paper, we are concerned with scheduling a mix of HI and LO tasks under EDF on one processor; a discussion of more levels of criticality is presented in the appendix. In particular, we make use of Vestal's task model (Vestal 2007). That is, LO tasks are modeled by only one worst-case execution time (WCET) (apart from inter-arrival time and deadline), while HI tasks are characterized by an optimistic and by a conservative WCET to account for potential increases in execution demand (Vestal 2007). In this context, a standard real-time design requires guaranteeing that LO and HI tasks meet their deadlines when HI tasks' conservative WCETs are considered. However, since HI tasks run for at most their optimistic WCETs almost all the time, this leads to an inefficient design. On the other hand, if only optimistic WCETs are considered, HI tasks may occasionally miss their deadlines.
To overcome this predicament, a common approach is to implement two operation modes. In LO mode, HI tasks run for their optimistic WCETs and are scheduled within virtual deadlines together with all LO tasks. Virtual deadlines are given by x_i ⋅ D_i and are usually shorter than the real deadlines D_i, with x_i ∈ (0, 1] being referred to as the deadline scaling factor. The system switches to HI mode, when one or more HI tasks require running for longer (up to their conservative WCETs). HI tasks are then scheduled within their real deadlines and LO tasks are stopped from running (i.e., discarded).
In such a setting, apart from guaranteeing schedulability of the individual modes, it is necessary to guarantee schedulability of transitions between modes. However, since transitions between LO and HI mode can happen at arbitrary points in time, carry-over jobs cannot be avoided, i.e., HI jobs that are released prior to, but have not finished executing at, the moment of the transition.
Approaches from the literature such as GREEDY (Ekberg and Yi 2012) and ECDF (Easwaran 2013) focus on bounding execution demand by carry-over jobs, resulting in relatively complex algorithms. In this paper, we propose a novel technique that works around the computation of carry-over execution demand. We prove that transitions between LO and HI mode are feasible, if an equivalent task set derived from the original one is schedulable under plain EDF. This not only reduces the overall complexity, but it also improves our understanding of MC scheduling. Moreover, we can apply well-known approximation techniques such as Devi's test (Devi 2003) to trade off accuracy for complexity/runtime. We present a large set of experiments based on synthetic data illustrating the benefits of the proposed approach in terms of weighted schedulability and runtime compared to the most prominent approaches from the literature.

Real-Time Systems (2021) 57:55-94

The rest of the paper is structured as follows. Related work is briefly discussed in Sect. 2. Section 3 introduces the task model and assumptions used, whereas Sect. 4 deals with the proposed technique for bounding execution demand under mixed-criticality EDF. In Sect. 5, we perform an analytical comparison with the GREEDY and the ECDF algorithms. Further, we apply Devi's test in Sect. 6 to derive two approximated variants of our proposed approach, i.e., based on per-task and on uniform deadline scaling. In Sect. 7, we present extensive experimental results evaluating the proposed technique, whereas Sect. 8 concludes the paper. Finally, in the appendix, we briefly investigate how to extend the proposed technique to more than two levels of criticality.

Related work
In this section, we briefly review the rich literature concerning scheduling MC tasks on one processor and, in particular, under EDF; a complete overview can be found in Burns and Davis (2018).
The problem around MC systems was first addressed by Vestal (2007), who proposed modeling HI tasks with multiple WCET parameters to account for potential increases in execution demand. Based on this model, Baruah et al. later analyzed per-task priority assignments and the resulting worst-case response times (Baruah et al. 2011b).
In Baruah et al. (2011a), Baruah et al. also proposed the EDF-VD algorithm to schedule a mix of HI and LO tasks. EDF-VD introduces two operation modes and uses a priority-promotion scheme by uniformly scaling deadlines of HI tasks. The resulting speed-up factor was shown to be equal to (√5 + 1)/2 (Baruah et al. 2011a). Later this speed-up factor was improved to 4/3 (Baruah et al. 2012). Baruah et al. further proposed extensions to EDF-VD, where, in particular, a per-task deadline scaling is used (Baruah et al. 2015). However, they also concluded that the speed-up factor of 4/3 cannot be improved (Baruah et al. 2015). Similarly, Müller presented a more general per-task deadline scaling technique that allows improving schedulability (Müller 2016).
Improvements to the original EDF-VD have also been proposed by other authors. In Su and Zhu (2013), Su and Zhu used an elastic task model (Kuo and Mok 1991) to improve resource utilization in MC systems. In Zhao et al. (2013), Zhao et al. applied preemption thresholds (Wang and Saksena 1999) in MC scheduling in order to better utilize the processing unit. In Masrur et al. (2015), a technique based on two scaling factors was proposed for admission control in MC systems.
A more flexible approach referred to as GREEDY, with per-task deadline scaling, was presented by Ekberg and Yi for the case of two criticality levels (Ekberg and Yi 2012). Ekberg and Yi characterized the execution demand of MC systems under EDF by deriving demand bound functions for the LO and the HI mode. Later, they extended this work to the case of more than two criticality levels (Ekberg and Yi 2014). In Easwaran (2013), Easwaran presented a similar technique called ECDF, also for the case of two criticality levels, and showed that it strictly dominates the GREEDY approach.
In contrast to GREEDY and ECDF, in Mahdiani and Masrur (2018), we proposed working around the computation of carry-over execution demand at transitions between LO and HI mode. In particular, testing schedulability of transitions from LO to HI reduces to testing schedulability of an equivalent task set under plain EDF (Mahdiani and Masrur 2018). In this paper, we extend this work and propose using approximation techniques, particularly Devi's test, to derive schedulability tests for MC systems under EDF that trade off accuracy versus performance, as discussed later in detail.

Models and assumptions
In this section, we discuss most of our notation. Similar to Ekberg and Yi (2012) and Easwaran (2013), we basically adopt the task model originally proposed in Vestal (2007). We consider a set of MC tasks that are independent, preemptable and sporadic, running on one processor under preemptive EDF scheduling. Each individual task τ_i is characterized by its minimum inter-release time T_i, i.e., the minimum distance between two consecutive jobs or instances of a task, and by its relative deadline D_i with D_i ≤ T_i for all i. Further, we assume that tasks do not self-suspend and that the overhead of context switches has been accounted for in the tasks' WCETs or can be neglected.
As already stated, we are concerned with dual-criticality systems with two levels of criticality, namely LO and HI. The criticality of a task τ_i is denoted by χ_i ∈ {LO, HI}. A LO task is associated with only one WCET value/estimate denoted by C_i^LO. Opposed to this, a HI task is characterized by its optimistic WCET estimate C_i^LO and its conservative WCET estimate C_i^HI with C_i^LO ≤ C_i^HI. The system basically distinguishes two operation modes denoted by m: LO and HI mode. In LO mode, HI tasks execute for no longer than C_i^LO, whereas they might require executing for up to C_i^HI in HI mode. The system initializes in LO mode, where all LO and HI tasks need to meet their deadlines. As soon as one job of a HI task executes for longer than its C_i^LO, the system switches to HI mode, discarding all LO tasks. Similar to context switches, we assume that the overhead of mode switches has been accounted for in C_i^HI. Finally, we denote the utilization of LO and HI tasks in LO and HI mode respectively by:

U^m_χ = Σ_{χ_i=χ} C_i^m / T_i,
where χ and m can assume values in {LO, HI}. Note that only U^LO_LO, U^LO_HI and U^HI_HI exist. U^HI_LO is effectively zero, since LO tasks are dropped and, hence, do not run in HI mode.
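To make the notation concrete, the following Python sketch (our own illustration; the names `Task` and `util` are hypothetical and not part of the paper) computes U^LO_LO, U^LO_HI and U^HI_HI for a small example set:

```python
from dataclasses import dataclass

@dataclass
class Task:
    # Hypothetical container for Vestal's dual-criticality task model.
    t: float           # minimum inter-release time T_i
    d: float           # relative deadline D_i (D_i <= T_i)
    c_lo: float        # optimistic WCET C_i^LO
    c_hi: float = 0.0  # conservative WCET C_i^HI (HI tasks only)
    crit: str = "LO"   # criticality level chi_i

def util(tasks, crit, mode):
    """U^mode_crit: utilization of all `crit` tasks using that mode's WCET."""
    return sum((tsk.c_lo if mode == "LO" else tsk.c_hi) / tsk.t
               for tsk in tasks if tsk.crit == crit)

tasks = [Task(t=10, d=10, c_lo=2),
         Task(t=20, d=20, c_lo=2, c_hi=6, crit="HI")]
u_lo_lo = util(tasks, "LO", "LO")  # 2/10 = 0.2
u_lo_hi = util(tasks, "HI", "LO")  # 2/20 = 0.1
u_hi_hi = util(tasks, "HI", "HI")  # 6/20 = 0.3
```

Note that U^HI_LO is simply never computed, mirroring the observation above that LO tasks do not run in HI mode.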

Mixed-criticality EDF
A common approach when scheduling MC systems is to shorten the deadlines of HI jobs in LO mode. This way, processor capacity can be reserved for a potential switch to HI mode, where HI tasks require more execution demand. In other words, we assign a virtual deadline equal to x_i ⋅ D_i with x_i ∈ (0, 1] to every τ_i with χ_i = HI. This virtual deadline is used instead of D_i, the real deadline, to schedule HI tasks in LO mode. The parameter x_i is the so-called deadline scaling factor. There is no deadline scaling for LO tasks, such that they are scheduled (only in LO mode) using their D_i.
When the system switches to HI mode, HI tasks start being scheduled according to their real deadlines D_i, whereas LO tasks are discarded immediately. In this paper, we consider that tasks are scheduled under EDF in both modes and refer to this scheme as mixed-criticality EDF.
Clearly, whereas schedulability of the separate modes can be easily tested, i.e., when the system is stable in either LO or HI mode, it is difficult to test schedulability of transitions between modes. In particular, careful analysis is required when the system switches from LO to HI mode.
In this paper, similar to other approaches from the literature, transitions from HI back to LO mode are disregarded. The reason is that, in contrast to changes from LO to HI, a change from HI to LO mode can be programmed or postponed to a suitable point in time, e.g., one at which the processor idles, and does not require further analysis.

The EDF-VD algorithm
EDF with Virtual Deadlines (EDF-VD) is a special case of mixed-criticality EDF for the case D_i = T_i for all i. Under EDF-VD, virtual deadlines are obtained by x ⋅ D_i, where x ∈ (0, 1] is a uniform deadline scaling factor for all HI tasks (Baruah et al. 2011a).
Under EDF-VD, the LO and HI tasks need to be schedulable with their corresponding C_i^LO under EDF in LO mode. Similarly, in HI mode, the HI tasks also need to be schedulable with their corresponding C_i^HI under EDF. As a result, the following two schedulability conditions are necessary:

(1) U^LO_LO + U^LO_HI ≤ 1,

(2) U^HI_HI ≤ 1.

Note that these two conditions do not ensure schedulability in the transition phase. Hence, they are indeed only necessary, but not sufficient conditions. In Baruah et al. (2012), Baruah et al. also obtained a sufficient schedulability condition for EDF-VD in the form of a utilization bound: max(U^LO_LO + U^LO_HI, U^HI_HI) ≤ 3/4. They also proposed a more accurate schedulability test based on whether feasible lower and upper bounds can be obtained on x (Baruah et al. 2012):

(3) x ≥ U^LO_HI / (1 − U^LO_LO),

(4) x ≤ (1 − U^HI_HI) / U^LO_LO.

That is, if the value of x obtained with (3) is less than or equal to the value obtained with (4), then it is possible to find a valid x for the considered system, rendering it schedulable under EDF-VD.
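As an illustration of how the lower and upper bounds on x translate into a test, the following Python sketch (our own, not the authors' implementation; degenerate corner cases are not handled) checks whether a valid uniform x exists:

```python
def edf_vd_schedulable(u_lo_lo, u_lo_hi, u_hi_hi):
    """Sufficient EDF-VD test: a uniform x in (0, 1] exists iff the
    lower bound (3) does not exceed the upper bound (4)."""
    if u_lo_lo >= 1:                      # degenerate corner case not handled
        return False
    # Necessary conditions (1) and (2) for the stable modes.
    if u_lo_lo + u_lo_hi > 1 or u_hi_hi > 1:
        return False
    x_min = u_lo_hi / (1 - u_lo_lo)       # lower bound (3) on x
    if u_lo_lo == 0:
        return x_min <= 1                 # any x in [x_min, 1] works
    x_max = (1 - u_hi_hi) / u_lo_lo       # upper bound (4) on x
    return x_min <= min(x_max, 1.0)
```

For instance, with U^LO_LO = 0.3, U^LO_HI = 0.2 and U^HI_HI = 0.5 the bounds are roughly 0.29 and 1.67, so a valid x exists.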

Bounding execution demand
In this section, we are concerned with characterizing the execution demand of the task set under mixed-criticality EDF. In Ekberg and Yi (2012) and Easwaran (2013), a demand bound function was presented for each mode in the system, i.e., one for the LO mode and one for the HI mode. As mentioned above, we derive a third demand bound function for the transition between modes. This allows working around the computation of carry-over execution demand, reducing the amount of pessimism and, hence, relaxing schedulability conditions in HI mode (Mahdiani and Masrur 2018).

Schedulability in LO mode
In LO mode, LO tasks need to be scheduled together with HI tasks, while the latter are assigned virtual deadlines x_i ⋅ D_i. As a consequence, the resulting demand bound function dbf_LO(t) in LO mode is given by:

(5) dbf_LO(t) = Σ_{χ_i=LO} (⌊(t − D_i)/T_i⌋ + 1) ⋅ C_i^LO + Σ_{χ_i=HI} (⌊(t − x_i ⋅ D_i)/T_i⌋ + 1) ⋅ C_i^LO.

Here t ≥ 0 is a (real) number representing time, i.e., dbf_LO(t) returns the task set's maximum execution demand in LO mode in an interval of length t. Note that dbf_LO(t) is always greater than or equal to zero, since D_i ≤ T_i holds for all i and x_i has values in (0, 1].
The system is schedulable in LO mode, if dbf_LO(t) ≤ t holds for all possible t until the processor first idles (Baruah et al. 1990), that is, until a point in time t_LO for which dbf_LO(t_LO) = t_LO holds. Removing the floor functions in (5) and solving for t_LO, we obtain an upper bound on t_LO:

(6) t_LO ≤ [Σ_{χ_i=LO} (1 − D_i/T_i) ⋅ C_i^LO + Σ_{χ_i=HI} (1 − x_i ⋅ D_i/T_i) ⋅ C_i^LO] / (1 − U^LO_LO − U^LO_HI).

The bound in (6) depends on the values of x_i, which are not known a priori. On the other hand, note that this bound maximizes when all x_i tend to 0, which then leads to the following:

(7) t_LO ≤ [Σ_{χ_i=LO} (1 − D_i/T_i) ⋅ C_i^LO + Σ_{χ_i=HI} C_i^LO] / (1 − U^LO_LO − U^LO_HI).

Clearly, U^LO_LO + U^LO_HI, the utilization in LO mode, must be strictly less than one in order that (6) and (7) return valid and finite values.
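A direct Python transcription of (5) and of the LO-mode test up to the bound (7) may look as follows (our own sketch; integer task parameters are assumed so that checking integer time points suffices, and the tuple formats are hypothetical):

```python
import math

def jobs(t, deadline, period):
    # Number of jobs of a sporadic task that have both release time
    # and deadline within an interval of length t.
    return max(0, math.floor((t - deadline) / period) + 1)

def dbf_lo(lo_tasks, hi_tasks, t):
    """dbf_LO(t) as in (5): LO tasks with real deadlines D_i, HI tasks
    with virtual deadlines x_i*D_i, all running for C_i^LO.
    lo_tasks: (T, D, C_lo) triples; hi_tasks: (T, D, C_lo, x) tuples."""
    return (sum(jobs(t, d, T) * c for (T, d, c) in lo_tasks) +
            sum(jobs(t, x * d, T) * c for (T, d, c, x) in hi_tasks))

def schedulable_lo(lo_tasks, hi_tasks):
    """Exact LO-mode test: dbf_LO(t) <= t up to the bound (7) on t_LO."""
    u = (sum(c / T for (T, d, c) in lo_tasks) +
         sum(c / T for (T, d, c, x) in hi_tasks))
    if u >= 1:
        return False
    # Bound (7): letting all x_i tend to 0 maximizes the numerator.
    t_max = (sum((1 - d / T) * c for (T, d, c) in lo_tasks) +
             sum(c for (T, d, c, x) in hi_tasks)) / (1 - u)
    return all(dbf_lo(lo_tasks, hi_tasks, t) <= t
               for t in range(1, math.ceil(t_max) + 1))
```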

Schedulability in stable HI mode
In this case, LO tasks do not run and HI tasks run for their corresponding C_i^HI, leading to the following demand bound function:

(8) dbf_HI(t) = Σ_{χ_i=HI} (⌊(t − D_i)/T_i⌋ + 1) ⋅ C_i^HI,

where again t ≥ 0 is a (real) number representing time, i.e., dbf_HI(t) returns the maximum execution demand in a time interval of length t.
The system is schedulable in stable HI mode, if dbf_HI(t) ≤ t holds for all possible t until the processor first idles, i.e., until a point in time t_HI is reached where dbf_HI(t_HI) = t_HI. We again can remove the floor function in (8) to obtain an upper bound on t_HI:

(9) t_HI ≤ [Σ_{χ_i=HI} (1 − D_i/T_i) ⋅ C_i^HI] / (1 − U^HI_HI).

U^HI_HI, the utilization in HI mode, must be strictly less than one in order that (9) returns a valid and finite upper bound on t_HI.
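Analogously, dbf_HI(t) in (8) and the bound (9) can be sketched as follows (again our own illustration, with a hypothetical tuple format):

```python
import math

def dbf_hi(hi_tasks, t):
    """dbf_HI(t) as in (8): only HI tasks, real deadlines D_i, budgets
    C_i^HI; hi_tasks is a list of (T, D, C_hi) triples."""
    return sum(max(0, math.floor((t - d) / T) + 1) * c_hi
               for (T, d, c_hi) in hi_tasks)

def t_hi_bound(hi_tasks):
    """Upper bound (9) on the first idle instant t_HI in stable HI mode."""
    u_hi = sum(c_hi / T for (T, d, c_hi) in hi_tasks)
    assert u_hi < 1, "U^HI_HI must be strictly less than 1"
    return sum((1 - d / T) * c_hi for (T, d, c_hi) in hi_tasks) / (1 - u_hi)
```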

Schedulability in the transition from LO to HI mode
The transition from LO to HI mode may happen at an arbitrary point in time when one HI job exceeds its LO execution budget C_i^LO. Unfinished LO jobs are discarded at that time; however, the problem arises with HI jobs that were released before, but have not finished executing their C_i^LO, i.e., carry-over jobs. Since it is difficult to accurately bound the execution demand of carry-over jobs, usually, pessimistic assumptions need to be taken.
The following theorem is a generalization of a theorem in Masrur et al. (2015) and allows working around carry-over jobs and characterizing the additional execution demand in the transition from LO to HI mode in a more effective manner. In other words, this theorem allows us to guarantee schedulability without computing carry-over execution demand at the point of switching from LO to HI mode.
Theorem 1 Given a set of MC tasks, let us assume that (i) dbf_LO(t) ≤ t and (ii) dbf_HI(t) ≤ t hold for 0 < t ≤ t_LO and 0 < t ≤ t_HI respectively, i.e., the task set is schedulable in LO and stable HI mode. The transition from LO to HI mode is schedulable under mixed-criticality EDF, if dbf_SW(t) ≤ t also holds for 0 < t ≤ t_SW, where dbf_SW(t) is given by:

(10) dbf_SW(t) = Σ_{χ_i=HI} (⌊(t − ΔD_i)/T_i⌋ + 1) ⋅ (C_i^HI − C_i^LO),

with ΔD_i = D_i − x_i ⋅ D_i, and t_SW is upper bounded by the following expression:

(11) t_SW ≤ [Σ_{χ_i=HI} (C_i^HI − C_i^LO)] / (1 − U^SW_HI),

where U^SW_HI is given by Σ_{χ_i=HI} (C_i^HI − C_i^LO)/T_i.
Proof Let us consider that the system switches to HI mode at time t′ and that the processor idles for the first time thereafter at t″ with t′ < t″, i.e., all jobs released prior to t″ finish executing at the latest by t″. Clearly, jobs that are released after t″ are guaranteed schedulable by assumption (ii).
Let us now assume that a deadline is missed for the first time at t_miss by a job of some τ_i that we denote τ_miss. Clearly, t_miss must be in the interval [t′, t″] and the following must hold for the interval of length t_miss − t′:

Σ_{χ_i=HI} max(0, ⌊(t_miss − t′ − φ_i − D_i)/T_i⌋ + 1) ⋅ C_i^HI + CO > t_miss − t′,

where φ_i = r′_i − t′ is the phase of a τ_i at t′, i.e., the release time r′_i of its first job after t′ minus t′. Note that φ_i is in the interval [0, T_i). In addition, CO denotes the carry-over execution demand at t′. This is the amount of execution in [t′, t_miss] by HI jobs that are released prior to t′, but have not finished executing at t′.
As already discussed, it is difficult to determine CO in an accurate manner. Hence, to work around CO, let us first divide each τ_i, whose jobs have both release times and deadlines in [t′, t_miss], into two subtasks. The first subtask, denoted by τ_i^LO, is released for the first time at φ_i and requires executing C_i^LO within x_i ⋅ D_i every T_i time, i.e., this represents τ_i's execution demand in LO mode. The second subtask, denoted by τ_i^SW, is released for the first time at φ_i + x_i ⋅ D_i and requires executing C_i^HI − C_i^LO within ΔD_i = D_i − x_i ⋅ D_i every T_i time; this represents τ_i's increase in execution demand incurred in HI mode. Note that, in spite of this modification, the total amount of execution demand in [t′, t_miss] does not change, i.e., a deadline is still missed at t_miss as per assumption, and we can reshape the above inequality in terms of the subtasks τ_i^LO and τ_i^SW. Note that t_miss coincides with the deadline of the corresponding job of τ_miss^SW, which misses its deadline. (Recall that τ_miss is now divided into the subtasks τ_miss^LO and τ_miss^SW.) Now, there are two possible cases to consider in order to prove this theorem: the set of only subtasks τ_i^SW is either (1) unschedulable or (2) schedulable in isolation.
Case (1): This is a rather trivial case. If the set of only τ_i^SW is unschedulable in isolation, i.e., when scheduled alone on a single processor, dbf_SW(t) > t must hold for some t in [0, t_SW] with dbf_SW(t) given as per (10). As a result, we will be able to detect a deadline miss in the transition between LO and HI mode by only testing the set of all τ_i^SW. To this end, we need to find an upper bound on t_SW by making dbf_SW(t_SW) = t_SW and removing the floor function as before:

t_SW ≤ [Σ_{χ_i=HI} (1 − ΔD_i/T_i) ⋅ (C_i^HI − C_i^LO)] / (1 − U^SW_HI),

where U^SW_HI = Σ_{χ_i=HI} (C_i^HI − C_i^LO)/T_i is the utilization of the set of only τ_i^SW. Since U^SW_HI < 1 holds by assumption (ii), the above inequality returns a valid bound on t_SW. This depends on ΔD_i = D_i − x_i ⋅ D_i and, therefore, on the values of x_i, which we do not know in advance. However, we can make x_i = 1 for all i, leading to the upper bound in (11).

Case (2):
We show that this case leads to a contradiction. To this end, recall that we assumed that a deadline is missed for the first time at t_miss; hence, all previous jobs in [t′, t_miss) can actually finish executing in time. Since now τ_miss is divided into the subtasks τ_miss^LO and τ_miss^SW, the job of τ_miss^LO with a deadline equal to t_miss − ΔD_miss must push carry-over jobs of τ_i^SW (i.e., those that are released prior to and have not finished executing at t_miss − ΔD_miss and that have deadlines prior to t_miss) by at least Δ_miss, where Δ_miss is the amount of the deadline miss at t_miss. Otherwise, no deadline can be missed at t_miss, since dbf_SW(t) ≤ t is assumed to hold for all t. In addition, note that the processor does not idle in [t_miss − ΔD_miss, t_miss].
Case (2.a): Let us assume that there is only one carry-over job of an arbitrary τ_i^SW and that ΔD_i ≤ ΔD_miss holds. Note that, after moving this job forward to force its release time to coincide with t_miss − ΔD_miss, τ_miss^SW's job continues to miss its deadline by at least Δ_miss at t_miss, since the deadline of this carry-over τ_i^SW's job remains within the interval [t_miss − ΔD_miss, t_miss]. As a result, the amount of execution demand in [t_miss − ΔD_miss, t_miss] remains the same or even increases after this displacement (see Fig. 1).
The above analysis leads to a contradiction, since τ_miss^SW's job and the carry-over τ_i^SW's job are now released in synchrony at t_miss − ΔD_miss. Consequently, τ_miss^LO cannot push any additional execution demand into [t_miss − ΔD_miss, t_miss] and, hence, if a deadline is missed at t_miss, dbf_SW(t) ≤ t cannot hold for all t.

Case (2.b):
Let us now assume that there is again only one carry-over job of an arbitrary τ_i^SW; however, ΔD_i > ΔD_miss holds this time. Note that we can displace this carry-over τ_i^SW's job forward until its deadline occurs at t_miss + ε, where ε is an infinitesimally small number greater than zero. As a result, this τ_i^SW's job starts missing its deadline by an amount equal to at least Δ_miss − ε, since at least the original execution demand in [t_miss − ΔD_miss, t_miss] is executed in [t_miss − ΔD_miss, t_miss + ε] (see Fig. 2).
It is easy to see that we can now apply the analysis of Case (2.a), where the carry-over τ_i^SW's job of this case misses its deadline at t_miss + ε (by an amount of at least Δ_miss − ε) and the τ_miss^SW's job of this case becomes the carry-over job in Case (2.a).
As a result, Case (2.b) also leads to a contradiction, i.e., if a deadline is missed at t_miss, dbf_SW(t) ≤ t cannot hold for all t.
Clearly, there can be several carry-over jobs whose execution demands are pushed (at least partially) by τ_miss^LO into [t_miss − ΔD_miss, t_miss]; however, the total amount of execution demand pushed by τ_miss^LO remains Δ_miss. In this latter case, again, it is easy to see that we can apply the analysis of Case (2.a) and Case (2.b) to each individual carry-over job. Consequently, if a deadline is missed at t_miss, dbf_SW(t) > t must hold for some t. The theorem follows. ◻

Theorem 1 allows characterizing the additional execution demand in the transition from LO to HI mode independently of carry-over jobs. Based on it, we test whether deadlines are met or not in [t′, t″], i.e., from the time t′ of switching to HI mode to the time t″ at which the processor first idles after switching.
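The quantities of Theorem 1 are straightforward to compute; the following Python sketch (our own illustration, with a hypothetical tuple format) evaluates dbf_SW(t) as per (10) and the bound (11) on t_SW:

```python
import math

def dbf_sw(hi_tasks, t):
    """dbf_SW(t) as in (10): each HI task contributes a subtask with
    WCET C_i^HI - C_i^LO and relative deadline dD_i = (1 - x_i)*D_i.
    hi_tasks: (T, D, C_lo, C_hi, x) tuples."""
    demand = 0.0
    for (T, d, c_lo, c_hi, x) in hi_tasks:
        dd = (1 - x) * d
        demand += max(0, math.floor((t - dd) / T) + 1) * (c_hi - c_lo)
    return demand

def t_sw_bound(hi_tasks):
    """Bound (11) on t_SW, obtained by setting x_i = 1 (i.e. dD_i = 0)."""
    u_sw = sum((c_hi - c_lo) / T for (T, d, c_lo, c_hi, x) in hi_tasks)
    assert u_sw < 1, "U^SW_HI must be strictly less than 1"
    return sum(c_hi - c_lo
               for (T, d, c_lo, c_hi, x) in hi_tasks) / (1 - u_sw)
```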

Finding deadline scaling factors
Now, we propose an algorithm to find valid values of x_i for each HI task. Clearly, this is closely related to the technique used to tighten deadlines in LO mode. In this paper, however, we do not aim to improve deadline tightening. The contribution is rather a new technique for bounding execution demand, which can be combined with existing deadline tightening techniques, e.g., from Ekberg and Yi (2012) or Easwaran (2013).
The proposed algorithm shown in Algorithm 1 essentially tests schedulability in LO mode (line 1) and in HI mode (line 2). If the task set is schedulable in LO mode, the function testLO() returns a vector X_LW with the minimum values of x_i that could be found to be valid. If this vector is not empty, i.e., valid x_i values could be found, and the task set is schedulable in HI mode, Algorithm 1 tests schedulability at the transition from LO to HI mode (line 3).
Further, if the set of HI tasks is schedulable at the transition from LO to HI, the function testSW() returns a vector X_UP with the minimum values of 1 − x_i that are also valid. That is, if X_UP is not empty either, the whole task set will be schedulable under mixed-criticality EDF provided that X_LW ≤ 1 − X_UP holds element-wise (line 4), where 1 denotes a unity vector (all elements equal to one). That is, for each element in the vectors X_LW and X_UP, the condition x_i^LW ≤ 1 − x_i^UP must hold. As already mentioned, the functions testLO() and testSW(), shown in Algorithms 2 and 3, test schedulability in LO mode and at the transition from LO to HI mode. These two functions are very similar, apart from testLO() dealing with the whole task set and testSW() with the subset of HI tasks, and return lower bounds on x_i and on 1 − x_i respectively. Thus, the following discussion of testLO() also applies to testSW().
Basically, testLO() computes dbf_LO(t) for all 0 ≤ t ≤ t_LO starting from x_i = 1 for all HI tasks. However, at least the first deadline of each task should be checked, since we need to compute each x_i. That is, we need to compute dbf_LO(t) at least until D_max = max_i(D_i); see line 3 in Algorithm 2. If the current t corresponds to a deadline of a HI task (lines 10-14), its (relative) virtual deadline x_i ⋅ D_i is adjusted such that its absolute deadline r_i + x_i ⋅ D_i is equal to dbf_LO(t) (i.e., the total execution demand at t).
Note that the execution demand of jobs with deadlines prior to t is contained in dbf_LO(t). As a result, the x_i computed in line 12 can never compromise schedulability of these previous jobs. In addition, an x_i currently being computed can only replace a previously computed x_i, if either this is the first value of x_i computed for the corresponding τ_i, i.e., t coincides with the first deadline of τ_i, or it is greater than the previous value of x_i (lines 11 to 13). In other words, after initialization, the values of x_i that are selected never shorten τ_i's virtual deadline x_i ⋅ D_i. As a result, we do not need to test τ_i's past deadlines anew. This deadline tightening technique reduces the number of possibilities for x_i, but it also keeps the algorithm simple.
Computed(i) in line 11 returns 'false' if no x_i has been computed yet for the current i; it flags whether a value of x_i has already been computed for the current i or not. The function getNextDeadline() in line 15 returns the point in time t at which the next deadline occurs and the index i of the task corresponding to that deadline. Clearly, this function has to take the computed values of x_i into account.
The function testLO() succeeds if it finishes testing dbf_LO(t) for 0 ≤ t ≤ max(D_max, t_LO) and it could find a value of x_i in (0, 1] for each HI task. On the other hand, testLO() fails if dbf_LO(t) > t holds for some t and either t corresponds to a deadline of a LO task, whose deadline cannot be adjusted by the used tightening technique, or the resulting x_i becomes greater than 1 (lines 4 to 8).

Analogous to testLO(), testSW() computes dbf_SW(t) for all deadlines in the interval 0 ≤ t ≤ max(D_max, t_SW), starting from 1 − x_i = 1 for all HI tasks; recall that deadlines in dbf_SW(t) are equal to (1 − x_i) ⋅ D_i. Otherwise, as mentioned above, testLO() and testSW() are very similar and, hence, the above explanation of testLO() also applies to testSW(). Finally, the function testHI() in Algorithm 1 is the known schedulability test for EDF from the literature (Baruah et al. 1990) and, hence, does not require further discussion.
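For intuition, the overall scheme can be emulated by a brute-force checker that scans a uniform scaling factor x and applies the three demand bound tests; this is our own simplified stand-in for Algorithm 1 (which computes per-task x_i), not the authors' implementation:

```python
import math

def demand_ok(tasks, horizon):
    """Exact dbf check at integer time points (integer task parameters
    assumed); tasks is a list of (T, deadline, wcet) triples."""
    for t in range(1, math.ceil(horizon) + 1):
        dem = sum(max(0, math.floor((t - d) / T) + 1) * c
                  for (T, d, c) in tasks)
        if dem > t:
            return False
    return True

def mc_edf_schedulable(lo, hi, steps=20):
    """Scan a uniform x and apply the three tests dbf_LO, dbf_SW, dbf_HI.
    lo: (T, D, C_lo) triples; hi: (T, D, C_lo, C_hi) tuples."""
    u_lo = sum(c / T for (T, d, c) in lo) + sum(c / T for (T, d, c, _) in hi)
    u_hi = sum(ch / T for (T, d, _, ch) in hi)
    u_sw = sum((ch - c) / T for (T, d, c, ch) in hi)
    if u_lo >= 1 or u_hi >= 1 or u_sw >= 1:  # simplification: reject boundary cases
        return False
    # Generous horizon: maximum of the idle-instant bounds (7), (9), (11) and D_max.
    b_lo = (sum((1 - d / T) * c for (T, d, c) in lo) +
            sum(c for (T, d, c, _) in hi)) / (1 - u_lo)
    b_hi = sum((1 - d / T) * ch for (T, d, _, ch) in hi) / (1 - u_hi)
    b_sw = sum(ch - c for (T, d, c, ch) in hi) / (1 - u_sw)
    d_max = max(d for (T, d, *rest) in lo + hi)
    horizon = max(b_lo, b_hi, b_sw, d_max)
    for k in range(1, steps + 1):
        x = k / steps
        lo_mode = lo + [(T, x * d, c) for (T, d, c, _) in hi]
        sw_mode = [(T, (1 - x) * d, ch - c) for (T, d, c, ch) in hi]
        hi_mode = [(T, d, ch) for (T, d, _, ch) in hi]
        if all(demand_ok(m, horizon) for m in (lo_mode, sw_mode, hi_mode)):
            return True
    return False
```

Note that checking dbf(t) ≤ t beyond the idle-instant bounds is harmless, since a schedulable sporadic set satisfies this inequality for all t.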

Analytical comparison
In this section, let us first compare the proposed demand bound functions from Sect. 4 with those used by Ekberg and Yi in the GREEDY algorithm (Ekberg and Yi 2012) and by Easwaran in the ECDF algorithm (Easwaran 2013). We show that, in most cases, the proposed ones result in tighter bounds on the execution demand than those of the mentioned approaches.

The GREEDY algorithm
In LO mode, note that dbf_LO(t) in (5) is identical to that of Ekberg and Yi, denoted by gdbf_LO(t) in this paper. That is, dbf_LO(t) = gdbf_LO(t) for all 0 < t ≤ t_LO.
In HI mode, Ekberg and Yi proposed a demand bound function, denoted gdbf_HI(t) in this paper, which is of the following form (Ekberg and Yi 2012):

gdbf_HI(t) = Σ_{χ_i=HI} [(⌊(t − ΔD_i)/T_i⌋ + 1) ⋅ C_i^HI − done_i(t)],

where done_i(t), with 0 ≤ done_i(t) ≤ C_i^LO, accounts for the amount of execution that a carry-over job is guaranteed to have already finished in LO mode. Note that gdbf_HI(t) bounds the execution demand in HI mode taking transitions into account. In our case, as discussed above, we derive different bounds on the execution demand at transitions and in stable HI mode, i.e., dbf_SW(t) and dbf_HI(t) respectively. Now, since done_i(t) ≤ C_i^LO holds for all valid values of t and x_i, dbf_SW(t) ≤ gdbf_HI(t) also holds for all t. In other words, if the transition to HI mode is safe by gdbf_HI(t), it will also be safe by the proposed dbf_SW(t). However, this does not hold the other way around, i.e., dbf_SW(t) results in a tighter bound on the execution demand at transitions from LO to HI mode than gdbf_HI(t).
On the other hand, if transitions are safe, it is guaranteed that no deadlines are missed after switching to HI mode at a t′ and until the processor first idles at a t″. From t″ onwards, it is easy to see that our proposed dbf_HI(t) is sufficient and necessary. That is, if dbf_HI(t) ≤ t does not hold for some t with 0 ≤ t ≤ t_HI, then the system is not feasible, i.e., it will not be feasible by gdbf_HI(t) either.

The ECDF algorithm
Similar to the case of the GREEDY algorithm, note that dbf_LO(t) in (5) is identical to that used by the ECDF algorithm in LO mode. We denote the latter by edbf_LO(t) in this paper. That is, dbf_LO(t) = edbf_LO(t) for all 0 < t ≤ t_LO.
In HI mode, the ECDF algorithm uses a demand bound function, denoted edbf_HI(t_1, t_2) in this paper (Easwaran 2013). Here t_1 represents the point in time at which the system switches to HI mode (i.e., t_1 = t′ in our notation) and t_2 is the point in time at which a deadline is potentially missed (i.e., t_2 = t_miss ≤ t″ in this paper). In (14), edbf_HI(t_1, t_2) combines each τ_i's contribution dbf_HI_i(⋅) to dbf_HI(⋅) shown in (8) with the carry-over demand CO(⋅) and terms dbf_L_j(⋅), where ΔC_i is defined as C_i^HI − C_i^LO. HI tasks in (14) are classified into three cases: case 1, which plays a role in computing dbf_L_j(⋅), case 2, and case 3. For details on how to compute dbf_L_j(⋅) for 1 ≤ j ≤ 3 and how to compute CO(⋅), we refer to Easwaran (2013).
The system is schedulable in HI mode, if edbf_HI(t_1, t_2) ≤ t_2 holds. (Here, again, no distinction is made between transition and stable HI mode.) Let us now consider that t_1 is less than or equal to Σ_{j=1}^{3} dbf_L_j(t_1, t_2). Easwaran proved that, in this case, the left-hand side of ECDF's schedulability condition (15) is equal to gdbf_HI(t_2 − t_1) (Easwaran 2013). As a consequence, the proposed dbf_SW(t) results in a tighter bound than (15), since dbf_SW(t) ≤ gdbf_HI(t) holds for all t; see again Sect. 5.1.
In the case where t_1 is greater than Σ_{j=1}^{3} dbf_L_j(t_1, t_2), the analytical comparison between edbf_HI(⋅) and dbf_SW(⋅) becomes difficult. This is the case where some amount of the execution demand given by edbf_HI(⋅) starts being executed before t_1, at t_1 − Σ_{j=1}^{3} dbf_L_j(t_1, t_2) to be more precise. Whether a proof of dominance exists (in either direction) remains an open problem.
At least, from the above discussion, we can assert that the proposed dbf_SW(⋅) is tighter for the more stringent case, i.e., when no execution demand by edbf_HI(⋅) can be executed before t_1. Our experiments in the following section present evidence that this also holds on average, i.e., dbf_SW(⋅) usually results in tighter bounds, particularly when the number of HI tasks increases.

Applying approximation techniques
In this section, we apply approximation techniques, in particular Devi's test (Devi 2003), to derive two variants of the proposed approach that trade off accuracy for reduced complexity/runtime. The first variant computes one deadline scaling factor per HI task, similar to the proposed approach, but with less accuracy in order to reduce complexity. In contrast, the second variant computes only one deadline scaling factor for all HI tasks, which requires, in principle, less computation. However, as discussed later in detail, this does not necessarily lead to a lower complexity.

Revisiting Devi's test
Let us consider a set τ of non-MC tasks that are independent, preemptable and sporadic. Similar to the mixed-criticality tasks defined in Sect. 3, each individual task τ_i in τ is defined by its minimum inter-release time T_i and its relative deadline D_i with D_i ≤ T_i. However, in contrast to mixed-criticality tasks, the tasks in τ are characterized by only one WCET parameter, denoted by C_i.
In addition, let us assume that the tasks in τ are sorted in order of non-decreasing deadlines, i.e., D_i ≤ D_j holds if i < j, where i and j are indices identifying tasks. Devi's test states that τ is feasible on one processor under preemptive EDF scheduling, if the following condition holds for 1 ≤ k ≤ |τ| (where |τ| denotes the number of tasks in τ) (Devi 2003):

Σ_{i=1}^{k} C_i/T_i + (1/D_k) · Σ_{i=1}^{k} ((T_i − D_i)/T_i) · C_i ≤ 1, (16)

resulting in a sufficient, but not necessary, test with polynomial complexity. In contrast, an exact test for τ, based on the concept of demand bound functions, has a higher, pseudo-polynomial complexity (Baruah et al. 1990).
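Condition (16) can be evaluated directly. The following is a minimal Python sketch of Devi's test for constrained-deadline sporadic tasks, assuming tasks are given as (C, D, T) triples; it is an illustration, not code from the paper:

```python
def devi_test(tasks):
    """Sufficient EDF feasibility test (Devi 2003) for sporadic tasks
    with constrained deadlines D <= T on one processor.
    `tasks` is a list of (C, D, T) tuples."""
    # The test requires tasks sorted by non-decreasing relative deadline.
    tasks = sorted(tasks, key=lambda t: t[1])
    for k in range(1, len(tasks) + 1):
        prefix = tasks[:k]
        D_k = prefix[-1][1]
        # First term: cumulative utilization of the k shortest-deadline tasks.
        util = sum(C / T for C, D, T in prefix)
        # Second term: deadline-related demand, scaled by 1/D_k.
        carry = sum(((T - D) / T) * C for C, D, T in prefix) / D_k
        if util + carry > 1.0:
            return False  # condition (16) violated for this k
    return True
```

For example, `devi_test([(1, 4, 4), (2, 6, 8)])` accepts, while a clearly overloaded set such as `[(3, 4, 4), (3, 5, 5)]` is rejected.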

Per-task deadline scaling
We can now derive the first approximated variant of our proposed approach, which computes a scaling factor for each individual HI task in the mixed-criticality task set. To this end, we use Devi's test to analyze each mode and the transition between them.

Schedulability in LO mode
When applying Devi's test in LO mode, the demand bound function in (5) reduces to condition (17), where k ranges over all tasks in the mixed-criticality task set. Tasks in (17) need to be sorted in order of non-decreasing real deadlines D_i for LO tasks or virtual deadlines x_i · D_i for HI tasks. Note that the order of tasks might change depending on the values of x_i. As a result, (17) might need to be recomputed for the corresponding tasks, if their relative order changes. Further, in (17), we have assumed that the task with index k is a HI task. As a result, a deadline scaling factor x_k needs to be computed. If task k is a LO task, the second term of (17) is divided by D_k instead of x_k · D_k, since LO tasks are scheduled within their real deadlines. In this case, no x_k needs to be computed.

Schedulability in stable HI mode
Now, when applying Devi's test in HI mode, the demand bound function in (8) reduces to condition (18), where again k ranges over the tasks as before. Note that only HI tasks are considered in (18), and they need to be sorted in order of non-decreasing D_i.

Schedulability in the transition from LO to HI mode
The demand bound function in (10) reduces to condition (19) after applying Devi's test. Similar to stable HI mode, only HI tasks are considered in (19). However, this time, tasks are sorted in order of non-decreasing ΔD_i instead. Similar to LO mode, the order of tasks might change depending on the values of x_i. In this case, (19) needs to be recomputed for all tasks whose relative order changes for a newly computed x_k.

Finding deadline scaling factors
As already discussed, (17) needs to hold for all k, which is tested in an iterative manner. Clearly, x_k only needs to be computed when task k is a HI task. To this end, we first reshape (17), which we can then solve for x_k. Note that we can change the upper limit of the first summation in the numerator to k − 1, since task k is a HI task. Further, reshaping the denominator, we finally obtain (20). As already mentioned, we might need to recompute (17) for all tasks whose relative order changes depending on the value of x_k. To avoid this complication, we derive the lower bound (21) for x_k, involving the deadlines of the neighboring tasks in the sorted order. In words, (21) guarantees that the selected x_k does not change the order of tasks in LO mode, so that (17) never has to be recomputed, clearly, at the cost of some accuracy. Next, we can obtain an upper bound on x_k by reshaping (19). Solving for 1 − x_k and reshaping as before, we obtain (22). Similar to before, to avoid having to recompute (19), we need to prevent the order of tasks from changing, for which we derive the upper bound (23) on x_k. Finally, a system is feasible under mixed-criticality EDF, if (17) holds for all k and (18) holds for all HI tasks. In addition, we need to find a valid x_k for every HI task k. That is, x_k must be greater than or equal to the maximum of (20) and (21) and, simultaneously, less than or equal to the minimum of (22) and (23).
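The interval-intersection step at the end can be sketched as follows. This is a hedged illustration: `lb_sched`, `lb_order`, `ub_sched` and `ub_order` are hypothetical placeholder functions standing in for Eqs. (20)–(23), which are not reproduced here:

```python
def find_per_task_factors(hi_indices, lb_sched, lb_order, ub_sched, ub_order):
    """Per-task deadline-scaling search: for every HI task k, a valid
    factor x_k must lie in the intersection of
    [max(lb_sched(k), lb_order(k)), min(ub_sched(k), ub_order(k))].
    The four bound functions are placeholders for Eqs. (20)-(23)."""
    factors = {}
    for k in hi_indices:
        lo = max(lb_sched(k), lb_order(k))   # lower bounds (20) and (21)
        hi = min(ub_sched(k), ub_order(k))   # upper bounds (22) and (23)
        if lo > hi:
            return None  # no valid scaling factor exists: reject the set
        factors[k] = lo  # any value in [lo, hi] is valid; pick the smallest
    return factors
```

With stub bounds this returns one factor per HI task, or None when the interval is empty for some k.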

Uniform deadline scaling
We can also apply Devi's test to derive a second approximated variant of our proposed approach, which computes only one deadline scaling factor for all HI tasks. In principle, this requires less computation than our first approximated variant. However, contrary to what one might expect, this second variant does not result in a lower complexity, since it cannot prevent the order of tasks from changing, as discussed below.

Schedulability in LO mode
We again apply Devi's test to the demand bound function in (5), but this time considering a uniform deadline scaling factor x, which leads to condition (24), where k again ranges over all tasks in the set. Just as before, tasks in (24) need to be sorted in order of non-decreasing real deadlines D_i for LO tasks or virtual deadlines x · D_i for HI tasks. Although the relative order of HI tasks (among themselves) never changes, the order of LO tasks with respect to HI tasks might still change for some x. If that happens, (24) needs to be recomputed for the corresponding tasks. In (24), note that the task with index k is considered to be a HI task and, hence, x needs to be computed. If task k is a LO task, the second term of (24) is divided by D_k instead of x · D_k, since LO tasks are scheduled within their real deadlines. In this case, no x is computed; however, (24) still needs to hold for the previously selected x.

Schedulability in stable HI mode
When considering a uniform deadline scaling factor x, we still obtain (18) as a result of applying Devi's test to (9). Hence, this case requires no further discussion.

Schedulability in the transition from LO to HI mode
The demand bound function in (11) reduces to condition (25) after applying Devi's test with a uniform deadline scaling factor x, where ΔC_i is defined as before. Only HI tasks are considered in (25), and tasks are sorted in order of non-decreasing (1 − x) · D_i. This time, however, the order of tasks cannot change for different values of x, since x affects all deadlines equally.

Finding a uniform deadline scaling factor
Proceeding as before, we can obtain a lower bound on x from (24), given by (26). Note that the order between some HI and LO tasks might change for a given x, requiring us to recompute (24). As mentioned above, in contrast to the per-task deadline scaling, it is difficult to derive an additional lower bound on x that prevents this from happening. We can, of course, select an x for the current task k such that D_{k−1}^LO ≤ x · D_k holds, where D_{k−1}^LO represents task k−1's deadline in LO mode (independent of whether this is a LO or HI task). However, this is not sufficient, since a new value of x also affects all previously tested HI tasks, whose deadlines might become smaller than some deadline of a LO task.
An upper bound on x can be obtained from (25), which leads to (27). According to this second variant of our proposed approach, the system is feasible under mixed-criticality EDF, if (24) holds for all k and (18) holds for all HI tasks. In addition, none of the values of x obtained with (27) may be less than any value obtained with (26) for the HI tasks.
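The final feasibility check reduces to intersecting all per-k bounds on the single factor x. A minimal sketch, assuming the lists of lower bounds from (26) and upper bounds from (27) have already been computed (they are not reproduced here):

```python
def uniform_factor_feasible(lower_bounds, upper_bounds):
    """Uniform-scaling condition: one factor x must satisfy every per-k
    lower bound from Eq. (26) and every per-k upper bound from Eq. (27).
    Returns a valid x, or None if the bound intervals do not intersect."""
    x_min = max(lower_bounds)
    x_max = min(upper_bounds)
    if x_min <= x_max:
        return (x_min + x_max) / 2  # any x in [x_min, x_max] is valid
    return None  # no common x exists: reject the task set
```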

Complexity
Similar to GREEDY (Ekberg and Yi 2012) and ECDF (Easwaran 2013), the proposed approach of Sect. 4 is based on computing demand bound functions and, hence, has a pseudo-polynomial complexity O(K_n · n) for task sets with a total utilization strictly less than 1 (Baruah et al. 1990). Note that n represents the number of tasks in the given task set and K_n is a factor that depends on the task parameters and, therefore, on n. In contrast, EDF-VD (Baruah et al. 2012) has a linear complexity O(n).
The approximated variant of Sect. 6.2, based on computing per-task deadline scaling factors, has a polynomial complexity O(n · log n) when the bounds in (21) and (23) are considered. As explained above, these prevent the order of tasks from changing for a newly computed deadline scaling factor. If (21) and (23) are not considered, the order of tasks might change, requiring us to retest some of the previously tested tasks and, hence, leading to a quadratic complexity O(n^2) instead.
On the other hand, the complexity of our second approximated variant, based on uniform deadline scaling, is quadratic, i.e., O(n^2), since, in the general case, i.e., with at least one LO task in the system, it cannot guarantee that the order of tasks does not change. As a result, retesting cannot be avoided.

Experimental evaluation
In this section, we present evaluation results based on synthetic data. The intention is to show how the different algorithms behave relative to each other, not to provide absolute performance metrics.
In particular, we compare the proposed approach of Sect. 4 with EDF-VD (Baruah et al. 2012), with the GREEDY algorithm by Ekberg and Yi (2012), and with ECDF by Easwaran (2013). Note that we had to modify EDF-VD to consider the deadlines D_i of tasks instead of their inter-arrival times or periods T_i, i.e., to consider the tasks' densities instead of their utilizations, in order to account for the case of constrained deadlines D_i ≤ T_i and to allow a meaningful comparison with the other algorithms.
In addition, we include our first approximated variant from Sect. 6.2 in this comparison. As discussed in the complexity analysis above, this is the variant with the lowest complexity, and it helps illustrate how much performance can be attained with the proposed technique at the least possible cost.

Schedulability curves
Figure 3 shows schedulability curves, i.e., the percentage of task sets deemed feasible by the above algorithms, versus LO utilization. For every increase in LO utilization, a total of 1000 different sets of 20 tasks each were randomly generated, i.e., 10,000 task sets in total. We made use of UUniFast (Bini and Buttazzo 2005) to generate individual task utilizations. Further, we used the log-uniform distribution proposed by Emberson et al. (2010) to create the task periods T_i in the range of 1–1000 ms. The log-uniform distribution guarantees that task periods are equally spread over the time bands 1–10 ms, 10–100 ms, etc.
With T_i and the task utilization, we obtained the values of C_i^LO. We assumed that 30% of the tasks are HI tasks, i.e., 6 out of 20 tasks. Further, for each HI task, we randomly selected an increase in HI execution demand of at most 50% of C_i^LO, with which we then obtained the values of C_i^HI. Deadlines D_i are constrained and chosen from a uniform distribution in the range [C_i^HI, T_i] for HI tasks and [C_i^LO, T_i] for LO tasks.
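The generation procedure described above can be sketched in Python. UUniFast and the log-uniform period distribution are standard; the function and parameter names (`generate_task_set`, `hi_fraction`, the default utilization) are our own illustrative choices, not from the paper:

```python
import math
import random

def uunifast(n, total_util, rng):
    """UUniFast (Bini & Buttazzo 2005): n utilizations summing to total_util."""
    utils, remaining = [], total_util
    for i in range(1, n):
        next_rem = remaining * rng.random() ** (1.0 / (n - i))
        utils.append(remaining - next_rem)
        remaining = next_rem
    utils.append(remaining)
    return utils

def log_uniform_period(t_min, t_max, rng):
    """Log-uniform period (Emberson et al. 2010): uniform in log space."""
    return math.exp(rng.uniform(math.log(t_min), math.log(t_max)))

def generate_task_set(n=20, total_util=0.6, hi_fraction=0.3, seed=0):
    """Sketch of the experimental setup: 30% HI tasks, up to 50% WCET
    increase, constrained deadlines drawn uniformly as described."""
    rng = random.Random(seed)
    tasks = []
    for i, u in enumerate(uunifast(n, total_util, rng)):
        T = log_uniform_period(1.0, 1000.0, rng)
        c_lo = u * T
        if i < int(hi_fraction * n):                  # HI task
            c_hi = c_lo * (1.0 + rng.uniform(0.0, 0.5))
            D = rng.uniform(c_hi, T)                  # D in [C_hi, T]
            tasks.append(("HI", c_lo, c_hi, D, T))
        else:                                         # LO task
            D = rng.uniform(c_lo, T)                  # D in [C_lo, T]
            tasks.append(("LO", c_lo, c_lo, D, T))
    return tasks
```

Each generated tuple is (level, C_lo, C_hi, D, T), and the LO utilizations sum to the requested total by construction.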
As depicted in Fig. 3, the percentage of schedulable task sets expectedly decreases with increasing LO utilization. Whereas all algorithms perform similarly for a LO utilization below 50%, they exhibit different behaviors for higher LO utilizations. In particular, the proposed approach outperforms ECDF by around 10–20% more accepted task sets in the range of 60 to 100% LO utilization. Interestingly, in spite of its lower complexity, our approximated variant shows a performance similar to the GREEDY algorithm.

Fig. 4 Weighted schedulability versus total number of tasks for 30% HI tasks and 50% increase of HI execution demand

Weighted schedulability
Next, we make use of the concept of weighted schedulability (Bastoni et al. 2010; Davis 2016) to analyze the performance of the above algorithms. That is, for a schedulability test A whose accuracy on a task set is a function of a parameter p, its weighted schedulability W_A(p) is given by:

W_A(p) = (Σ_τ U(τ) · S_A(τ, p)) / (Σ_τ U(τ)),

where U(τ) is the utilization of a given task set τ and S_A(τ, p) is A's binary result (1 if schedulable and 0 if not) for a task set with parameter value p. In other words, individual schedulability results by A are weighted according to the utilization of the task sets tested, putting more emphasis on higher-utilization ones.
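The weighting is a utilization-weighted average of binary outcomes. A minimal sketch, assuming the test results have already been collected as (utilization, outcome) pairs:

```python
def weighted_schedulability(results):
    """Weighted schedulability (Bastoni et al. 2010).
    `results` is a list of (U, s) pairs: U is the task-set utilization
    and s is the binary test outcome (1 schedulable, 0 not)."""
    total = sum(U for U, _ in results)
    # Each outcome is weighted by its task set's utilization.
    return sum(U * s for U, s in results) / total
```

For instance, one accepted set at utilization 0.9 and one rejected set at 0.1 yield a weighted schedulability of 0.9 rather than the unweighted 0.5.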
We created weighted schedulability curves varying the following parameters: (i) the total number of tasks, (ii) the percentage of HI tasks, (iii) the increase of HI execution demand and (iv) the range of task periods. Every time we varied one of these parameters, we generated 1000 different task sets for each LO utilization value between 0 and 100% at steps of 10%, i.e., a total of 10,000 task sets per marker on the shown curves. To this end, we proceeded as described previously to obtain task parameters.
Figure 4 shows weighted schedulability curves for a varying total number of tasks, where we selected the number of HI tasks to be 30% of the total (i.e., it also varies proportionally) and the increase of HI execution demand to be 50% of the LO execution demand for each HI task. The proposed approach outperforms all other algorithms by around 10–20%, depending on the total number of tasks. In general, the more tasks there are, the better the proposed algorithm performs with respect to the others. It is interesting to note that ECDF performs better than GREEDY up to 30 tasks per set, after which GREEDY performs better. Further, ECDF seems to stagnate at around 80% weighted schedulability, in contrast to GREEDY and the proposed approach.
Our approximated variant performs well up to around 40 tasks, after which its performance drops sharply, even becoming worse than EDF-VD from 80 tasks onwards. The reason is that this algorithm is based on Devi's test, which requires tasks to be sorted by deadline. Since the order of tasks may change with every new deadline scaling factor, some tasks may need to be retested, as explained in Sect. 6.2. Clearly, the more tasks there are, the more likely it is that their order changes when scaling one deadline. To avoid this and reduce complexity, we introduced conditions (21) and (23), which truncate the valid range of deadline scaling factors. This has the downside, however, that the number of wrongly rejected task sets increases disproportionately as the total number of tasks grows.
Weighted schedulability curves for a varying percentage of HI tasks are shown in Fig. 5. This time, we selected the total number of tasks to be 20, whereas the increase of HI execution demand remains 50% as in the previous case. We can see that the performance of all approaches decreases with an increasing number of HI tasks. Up to around 20% HI tasks (i.e., 4 out of 20), the proposed approach and ECDF behave similarly. However, ECDF's performance then decreases rapidly, becoming worse than GREEDY and even EDF-VD at 50% and 60% HI tasks respectively.
At around 80% HI tasks (16 out of 20), the proposed approach still accepts around 80% of the task sets independent of the LO utilization, whereas all other algorithms are at or below 40% schedulable task sets. This evidences the effectiveness of the proposed approach over GREEDY and ECDF in general cases. In particular, GREEDY and ECDF are based on estimating the worst-case contributions by carry-over jobs at the moment of switching from LO to HI mode. This inevitably becomes pessimistic as the number of carry-over jobs grows, which directly depends on the number of HI tasks. In the case of our approximated variant, conditions (21) and (23) again start dominating in Fig. 5. Even though the total number of tasks remains constant, these two conditions are evaluated for each HI task. As a result, if the number of HI tasks increases, they play a bigger role and, hence, accentuate the decrease in performance.
In Fig. 6, we further present weighted schedulability curves for a varying increase of HI execution demand. We again selected the total number of tasks to be 20 and the number of HI tasks to be 30%. In this case, the behavior of the algorithms slightly worsens for a growing HI execution demand, with the exception of ECDF, whose behavior slightly improves. In spite of this, the proposed algorithm outperforms all others by around 10% more schedulable task sets in the range of 10–80% increase in HI execution demand. Interestingly, our approximated variant also shows a good performance in this range, which is even better than that of the GREEDY algorithm up to a 60% increase in HI execution demand.

Last, Fig. 7 shows weighted schedulability curves for a varying range of task periods, with the total number of tasks again being 20, out of which 30% are HI tasks with a 50% increase in HI execution demand. In this case, the performance of all algorithms rapidly goes down for an increasing range of task periods. The proposed approach outperforms ECDF by 10–20% more schedulable task sets when the minimum and the maximum task period are 3 to 4 orders of magnitude apart. Note that our approximated variant outperforms GREEDY for period ranges of 2.5 orders of magnitude upwards and has a performance comparable to that of ECDF between 3.5 and 4 orders of magnitude.

Runtime comparison
Figures 8 to 10 show a comparison of runtime versus LO utilization, total number of tasks, and range of task periods, respectively. Note, however, that we implemented the different algorithms in Matlab; further optimization is possible and could change their runtime behavior. As shown in Fig. 8, ECDF is around one to two orders of magnitude faster than GREEDY and the proposed approach, depending on LO utilization. Our approximated variant is around one order of magnitude slower than EDF-VD and around one to two orders of magnitude faster than ECDF. This behavior remains almost unchanged as the number of tasks increases towards 100 tasks per set; see Fig. 9.
Here, we maintained the percentage of HI tasks and the increase of HI execution demand at 30% and 50% respectively. Only for 10 tasks per set are GREEDY and the proposed algorithm as fast as ECDF.
Figure 10 shows runtime curves for an increasing range of task periods. We again kept the total number of tasks at 20, the percentage of HI tasks at 30% and the increase of HI execution demand at 50%. As expected, our approximated variant and EDF-VD have a constant runtime for an increasing range of periods, since both have polynomial complexity. All other algorithms experience an increasing runtime for greater period ranges. ECDF continues to be around one order of magnitude faster than GREEDY and the proposed algorithm. However, this difference shrinks as the range of task periods grows. At 4 orders of magnitude between the minimum and the maximum period, all these algorithms show the same runtime.
It should be noted that Figs. 9 and 10 are independent of LO utilization. For each marker on these curves, we generated 1000 different task sets for each LO utilization value between 0 and 100% at steps of 10%, i.e., a total of 10,000 task sets per marker.

Concluding remarks
In this paper, we studied the problem of mixed-criticality scheduling under EDF, where a set of low-criticality (LO) and high-criticality (HI) tasks share one processor. As in the literature, we characterize the execution demand of a mixed-criticality task set by deriving demand bound functions for the different operation modes, i.e., LO and HI mode. In particular, we handle transitions from LO to HI mode separately from the stable HI mode, which allows us to work around carry-over jobs and, therefore, to reduce pessimism in estimating execution demand under mixed-criticality EDF.
It is interesting to note that the proposed approach reduces the problem of testing schedulability under mixed-criticality EDF to testing the schedulability of three almost unrelated task sets: the one in LO mode, the one in HI mode, and the equivalent task set for transitions between LO and HI mode. This allows applying well-known approximation techniques that trade off accuracy for reduced complexity/runtime, as illustrated for the case of Devi's test.
Further, we conducted a large set of experiments on synthetic data showing the effectiveness of the proposed approach and of its approximated variant.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Multiple levels of criticality
In practice, more than two levels of criticality are common, e.g., four different automotive safety integrity levels (ASIL) are defined in the ISO 26262 standard. To account for this, we illustrate how to apply the proposed approach to add a third level of criticality between LO and HI: the mid-criticality (MI) level. Note that additional levels of criticality can be added in a straightforward manner based on the presented analysis.
Just as before, the system implements one mode of operation per criticality level, resulting now in three modes: LO, MI, and HI mode. Further, tasks are classified according to their criticality into LO, MI and HI tasks. LO tasks only run in LO mode and are discarded in MI and HI mode. MI tasks run in LO and MI mode, but are discarded in HI mode, whereas HI tasks run in all modes.
All tasks are defined by their minimum inter-release times T_i and their relative deadlines D_i. LO tasks are characterized by their WCET parameter C_i^LO, whereas MI tasks have a C_i^LO and a C_i^MI parameter, which denote their WCET in LO and MI mode respectively. HI tasks now have three WCET parameters, i.e., C_i^LO, C_i^MI, and C_i^HI.

Ordered mode switches
We first consider ordered mode switches. That is, the system switches from LO to MI mode, if either a MI or a HI task runs for more than its C_i^LO in LO mode, and from MI to HI mode, if a HI task runs for more than its C_i^MI in MI mode. Note that there is no direct transition from LO to HI mode, i.e., the system first switches to MI and then to HI mode. The necessary extensions for unordered mode switches, i.e., when the system switches from LO directly to HI mode, are discussed below. For ease of exposition, we first analyze schedulability in MI mode, then in HI mode, and last in LO mode.

Schedulability in stable MI mode
In MI mode, LO tasks are discarded and MI tasks are scheduled together with HI tasks. MI tasks are scheduled within their real deadlines; HI tasks, however, are assigned virtual deadlines y_i · D_i, where y_i ∈ (0, 1] denotes the per-task scaling factor in MI mode. Both MI and HI tasks run for their corresponding C_i^MI, leading to a demand bound function dbf^MI(t) that resembles (5). Proceeding as before, we obtain an upper bound on t^MI, i.e., the point in time up to which we need to check feasibility, i.e., that dbf^MI(t) < t holds; letting y_i tend to 0 yields the corresponding expression.
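The dbf^MI expression itself was lost in extraction. Based on the standard demand bound function for sporadic tasks and the description above (MI tasks at real deadlines, HI tasks at virtual deadlines y_i D_i, both with their MI-mode WCETs), it plausibly has the following form; this is an assumed reconstruction, not the paper's verbatim equation:

```latex
\mathrm{dbf}^{\mathrm{MI}}(t) =
\sum_{i \in \mathrm{MI}} \max\!\left(0, \left\lfloor \frac{t - D_i}{T_i} \right\rfloor + 1\right) C_i^{\mathrm{MI}}
+ \sum_{i \in \mathrm{HI}} \max\!\left(0, \left\lfloor \frac{t - y_i D_i}{T_i} \right\rfloor + 1\right) C_i^{\mathrm{MI}}
```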

Schedulability in stable HI mode
In HI mode, only HI tasks are allowed to run and they are scheduled within their real deadlines. Hence, dbf^HI(t) is given by (8) and the upper bound on t^HI is given by (9), which requires no further discussion.

Schedulability in the transition from MI to HI mode
We can apply Theorem 1 to obtain the demand bound function dbf^SW1(t) at transitions from MI to HI mode, which again resembles (10). Similarly, we proceed to obtain an upper bound on t^SW1, i.e., the point in time up to which dbf^SW1(t) < t needs to be checked. The resulting expression resembles (11), with U^SW1_HI given by the corresponding summation over the HI tasks.

Schedulability in LO mode
In LO mode, LO tasks need to be scheduled together with MI and HI tasks. MI tasks are assigned virtual deadlines x_i · D_i, while HI tasks are assigned virtual deadlines x_i · y_i · D_i. That is, their virtual deadlines in MI mode (i.e., y_i · D_i) are scaled again by x_i ∈ (0, 1]. The resulting demand bound function dbf^LO(t) in LO mode follows accordingly. We can proceed as before to obtain an upper bound on t^LO, i.e., the point in time up to which we need to check that dbf^LO(t) < t holds. In addition to U^LO_LO and U^LO_HI, letting x_i tend to 0, we obtain the corresponding bound, where it should be noted that the second summation on its right-hand side applies to both MI and HI tasks in the system.
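As with dbf^MI, the dbf^LO expression was lost in extraction. Following the standard sporadic demand bound function and the deadline assignments described above (LO tasks at D_i, MI tasks at x_i D_i, HI tasks at x_i y_i D_i, all with their LO-mode WCETs), a plausible reconstruction, again not the paper's verbatim equation, is:

```latex
\mathrm{dbf}^{\mathrm{LO}}(t) =
\sum_{i \in \mathrm{LO}} \max\!\left(0, \left\lfloor \frac{t - D_i}{T_i} \right\rfloor + 1\right) C_i^{\mathrm{LO}}
+ \sum_{i \in \mathrm{MI}} \max\!\left(0, \left\lfloor \frac{t - x_i D_i}{T_i} \right\rfloor + 1\right) C_i^{\mathrm{LO}}
+ \sum_{i \in \mathrm{HI}} \max\!\left(0, \left\lfloor \frac{t - x_i y_i D_i}{T_i} \right\rfloor + 1\right) C_i^{\mathrm{LO}}
```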

Schedulability in the transition from LO to MI mode
We can again apply Theorem 1 to obtain the demand bound function dbf^SW2(t) for transitions from LO to MI mode. Similarly, we proceed to obtain an upper bound on t^SW2, i.e., the point in time up to which dbf^SW2(t) < t needs to be checked, where U^SW2_MI and U^SW2_HI are given by the corresponding summations.

Finding deadline scaling factors
In contrast to the case of two levels of criticality, we now have to compute two deadline scaling factors, y_i and x_i. We can still use the proposed approach from Sect. 4, but in an iterative manner. That is, we first use the proposed approach to obtain the y_i, i.e., the deadline scaling factors in MI mode. Once we have the values of y_i, we apply the approach again to find the x_i, i.e., the deadline scaling factors in LO mode.
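The two-pass structure can be sketched as follows. `solve_factors` is a hypothetical placeholder standing in for the two-level procedure of Sect. 4, which is not reproduced here:

```python
def find_three_mode_factors(hi_tasks, solve_factors):
    """Two-pass sketch for three criticality levels: first solve the
    MI-mode factors y_i (treating MI+HI like the two-level case), then,
    with the y_i fixed, solve the LO-mode factors x_i.
    `solve_factors(tasks, mode, fixed=None)` is a placeholder for the
    procedure of Sect. 4; it returns a dict of factors, or None."""
    y = solve_factors(hi_tasks, mode="MI")
    if y is None:
        return None  # MI mode or the MI->HI transition is not schedulable
    x = solve_factors(hi_tasks, mode="LO", fixed=y)
    if x is None:
        return None  # LO mode or the LO->MI transition is not schedulable
    return x, y
```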

Unordered mode switches
If we were to allow for unordered mode switches, i.e., from LO directly to HI mode in the above setting with three levels of criticality, we need to consider them separately. To this end, we assume that a subset of the HI tasks causes a direct transition to HI mode (instead of MI mode as assumed so far) when running for more than C_i^LO in LO mode. As a result, in LO mode, we now obtain a demand bound function in which z_i is a deadline scaling factor that guarantees schedulability for the direct transition from LO to HI mode. In HI mode, again, dbf^HI(t) as given by (8) remains valid. As a result, with all x_i obtained as discussed for the case of ordered mode switches, we can compute each z_i in (37) based on the approach from Sect. 4 as well. Finally, to guarantee safety independent of whether the system switches to MI or HI mode, HI tasks now have to be scheduled in LO mode using the minimum between x_i · y_i, which covers ordered transitions, and z_i, which accounts for the unordered case. For more than three levels of criticality, note that all possible unordered mode switches can be handled analogously.
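The final safety rule reduces to taking the minimum of the two scalings per HI task. A small helper (names hypothetical, for illustration only):

```python
def effective_lo_deadline(D, x, y, z=None):
    """LO-mode virtual deadline of a HI task when unordered switches are
    possible: the ordered chain LO->MI->HI gives the scaling x*y, a direct
    LO->HI switch adds the constraint z; the safe choice is the minimum.
    With z=None the task only takes ordered transitions."""
    scale = x * y if z is None else min(x * y, z)
    return scale * D
```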

Uniform distribution for task periods
In Sect. 7, we used the log-uniform distribution proposed by Emberson et al. (2010) to generate task periods in [1, T_max], where T_max was set to 1000 in the default case. The log-uniform distribution equally spreads task periods over the time bands 1–10, 10–100, etc. and, hence, the resulting task sets have an equal number of tasks in each such band. In contrast, a uniform distribution tends to concentrate task periods in the middle of [1, T_max], resulting in task sets where most tasks have periods of the same order of magnitude (around 500 for T_max = 1000). Task sets generated this way lead to a different performance of the algorithms, as shown below in Fig. 11 for schedulability and in Figs. 12, 13, 14 and 15 for weighted schedulability. Moreover, the algorithms' behavior changes with respect to runtime, as shown in Figs. 16, 17 and 18.
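The difference between the two distributions is easy to demonstrate empirically. The following sketch (our own illustration, not from the paper) counts sampled periods per decade band; log-uniform sampling fills the bands roughly evenly, while uniform sampling puts almost all periods in the top band [100, 1000]:

```python
import math
import random

def decade_band_counts(periods, t_min=1.0, t_max=1000.0):
    """Count how many periods fall into each decade band
    [1,10), [10,100), [100,1000]."""
    n_bands = int(round(math.log10(t_max / t_min)))
    counts = [0] * n_bands
    for p in periods:
        band = min(int(math.log10(p / t_min)), n_bands - 1)
        counts[band] += 1
    return counts

rng = random.Random(42)
# Log-uniform: uniform in log space over [1, 1000].
log_uniform = [math.exp(rng.uniform(0.0, math.log(1000.0))) for _ in range(3000)]
# Plain uniform over [1, 1000].
uniform = [rng.uniform(1.0, 1000.0) for _ in range(3000)]
```

With 3000 samples, the log-uniform bands each hold roughly a third of the samples, whereas roughly 90% of the uniform samples land in [100, 1000].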

Fig. 1 Illustration of Case (2.a) with ΔD_i ≤ ΔD_miss: the carry-over job of SW_i is illustrated in green and SW_miss in red. The light green and red shading represent LO_i and LO_miss respectively, whereas light gray shading represents any previous higher-priority execution (i.e., of jobs with shorter deadlines). Solid upward arrows stand for the release times of LO_i and LO_miss, while dashed upward arrows represent the release times of SW_i and SW_miss (which are also the (virtual) deadlines of LO_i and LO_miss). Solid downward arrows stand for the (real) deadlines of SW_i and SW_miss (Color figure online)

Fig. 5 Weighted schedulability versus percentage of HI tasks for 20 tasks in total and 50% increase of HI execution demand

Fig. 9 Runtime versus total number of tasks for 30% HI tasks and 50% increase of HI execution demand

Fig. 17 Runtime versus total number of tasks for 30% HI tasks and 50% increase of HI execution demand (uniform distribution of task periods)