Feasibility of on-line speed policies in real-time systems


Abstract

We consider a real-time system where a single processor with variable speed executes an infinite sequence of sporadic and independent jobs. We assume that job sizes and relative deadlines are bounded by C and \(\varDelta \) respectively. Furthermore, \(S_{\max }\) denotes the maximal speed of the processor. In such a real-time system, a speed selection policy dynamically (i.e., on-line) chooses the speed of the processor to execute the current, not yet finished, jobs. We say that an on-line speed policy is feasible if it is able to execute any sequence of jobs while meeting two constraints: the processor speed never exceeds \(S_{\max }\) and no job misses its deadline. In this paper, we compare the feasibility regions of four on-line speed selection policies for single-processor real-time systems, namely Optimal Available \({\text{(OA)}}\) (Yao et al. 1995), Average Rate \({\text{(AVR)}}\) (Yao et al. 1995), \({\text{(BKP)}}\) (Bansal et al. 2007), and a Markovian Policy based on dynamic programming \({\text{(MP)}}\) (Gaujal et al. 2017). We prove the following results:

  • \( {\text{(OA)}}\) is feasible if and only if \(S_{\max } \ge C (h_{\varDelta -1}+1)\), where \(h_n\) is the n-th harmonic number (\(h_n = \sum _{i=1}^n 1/i \approx \log n\)).

  • \({\text{(AVR)}}\) is feasible if and only if \(S_{\max } \ge C h_\varDelta \).

  • \({\text{(BKP)}}\) is feasible if and only if \(S_{\max } \ge e C\) (where \(e = \exp (1)\)).

  • \({\text{(MP)}}\) is feasible if and only if \(S_{\max } \ge C\). This is an optimal feasibility condition because when \(S_{\max } < C\) no policy can be feasible.

This reinforces the interest of \({\text{(MP)}}\), which is not only optimal for energy consumption (on average) but also optimal regarding feasibility.
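To make these feasibility bounds concrete, the short Python sketch below (our own illustration; the function names are not from the paper) evaluates the four thresholds for a given maximal job size C and maximal relative deadline \(\varDelta \):

```python
from math import e

def harmonic(n: int) -> float:
    """n-th harmonic number h_n = sum_{i=1}^{n} 1/i (h_0 = 0)."""
    return sum(1.0 / i for i in range(1, n + 1))

def feasibility_thresholds(C: float, Delta: int) -> dict:
    """Smallest S_max making each policy feasible, per the results above."""
    return {
        "OA":  C * (harmonic(Delta - 1) + 1),  # S_max >= C (h_{Delta-1} + 1)
        "AVR": C * harmonic(Delta),            # S_max >= C h_Delta
        "BKP": e * C,                          # S_max >= e C
        "MP":  C,                              # S_max >= C (optimal bound)
    }

if __name__ == "__main__":
    # Example: unit-size jobs (C = 1) and relative deadlines up to Delta = 10.
    for policy, s_max in feasibility_thresholds(C=1.0, Delta=10).items():
        print(f"({policy}) requires S_max >= {s_max:.3f}")
```

For instance, with \(C = 1\) and \(\varDelta = 10\), the thresholds are approximately 3.83 for \({\text{(OA)}}\), 2.93 for \({\text{(AVR)}}\), \(e \approx 2.72\) for \({\text{(BKP)}}\), and 1 for \({\text{(MP)}}\).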


References

  • Augustine J, Irani S, Swamy C (2004) Optimal power-down strategies. Symposium on Foundations of Computer Science, FOCS’04. IEEE, Rome, Italy, pp 530–539

  • Bansal N, Kimbrel T, Pruhs K (2007) Speed scaling to manage energy and temperature. J ACM 54:1


  • Chen J, Stoimenov N, Thiele L (2009) Feasibility analysis of on-line DVS algorithms for scheduling arbitrary event streams. In: Real-time systems symposium, RTSS’09. Washington (DC), USA, pp 261–270

  • Gaujal B, Girault A, Plassart S (2017) Dynamic speed scaling minimizing expected energy consumption for real-time tasks. Technical Report hal-01615835, Inria

  • Jejurikar R, Gupta R (2004) Procrastination scheduling in fixed priority real-time systems. In: Conference on languages, compilers, and tools for embedded systems, LCTES’04. ACM, Washington (DC), USA, pp 57–66

  • Liu C, Layland J (1973) Scheduling algorithms for multiprogramming in a hard-real-time environment. J ACM 20(1):46–61


  • Petrovitsch M (1901) Sur une manière d’étendre le théorème de la moyenne aux équations différentielles du premier ordre. Math Ann 54(3):417–436


  • Puterman ML (2005) Markov decision processes: discrete stochastic dynamic programming. Wiley series in probability and statistics. Wiley, New York


  • Sutton RS, Barto AG (2018) Reinforcement learning: an introduction, 2nd edn. The MIT Press, Cambridge, MA


  • Thiele L, Chakraborty S, Naedele M (2000) Real-time calculus for scheduling hard real-time systems. In: International Symposium on Circuits and Systems, ISCAS’00, IEEE, pp 101–104

  • Yao F, Demers A, Shenker S (1995) A scheduling model for reduced CPU energy. In: IEEE annual symposium on foundations of computer science, pp 374–382


Acknowledgements

This work has been partially supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01) funded by the French program Investissement d’avenir.


Appendices

Appendix 1: Uniqueness of the solution of Eq. (39)

Let us rewrite Eq. (39), for all \(t \in [k,k+1)\), as follows:

$$\begin{aligned} \pi (t) = \max _{v>t} \frac{w_k(v) - \int _k^{t} \pi (x) {\text{d}}x}{v-t}. \end{aligned}$$
(70)

The goal of this appendix is to prove that there exists a unique solution \(\pi \) of this equation. Performing the change of variable \(u=v-t\) in Eq. (70), we obtain:

$$\begin{aligned} \pi (t) = \sup _{u > 0} \frac{w_k(u+t)-\int _k^t \pi (x) {\text{d}}x}{u}. \end{aligned}$$
(71)

By defining \(W(t) = \int _k^t \pi (x) {\text{d}}x\), and integrating Eq. (71) between k and t, we obtain:

$$\begin{aligned} W(t) = \int _k^t \sup _{u > 0} \frac{w_k(u+s)- W(s)}{u} {\text{d}}s. \end{aligned}$$
(72)

Let us now define the function \(F(s,x)\) as follows:

$$\begin{aligned} F(s,x) = \sup _{u>0} \left( \frac{f_s(u)-x}{u} \right) \end{aligned}$$
(73)

where \(f_s(\cdot ) = w_k(\cdot +s)\). Then \(f_s(\cdot )\) is such that:

  1. \(f_s\) is an increasing function bounded by \(C\varDelta \in {\mathbb {R}}^{+}\).

  2. \(f_s(0) = w_k(s) = 0\), because \(w_k(k) = 0\) by feasibility and no job arrives between k and s.

  3. \(f_s\) satisfies:

    $$\lim_{\begin{subarray}{c}t \rightarrow 0 \\ t \ge 0 \end{subarray}} \frac{f_{s}(t)}{t} = 0$$
    (74)

    because \(w_k(s)\) is constant for \(s \in [k,k+1)\).

The function \(W(t) = \int _k^t \pi (u) {\text{d}}u\) therefore satisfies the following integral equation:

$$\begin{aligned} W(t) = \int _k^t F(s,W(s)) {\text{d}}s. \end{aligned}$$
(75)

Lemma 2

There exists a unique solution W to Eq. (75).

Proof

First, let us show, in Lemmas 3 and 4, that the function \(F(s,x)\) is Lipschitz in x.

Lemma 3

Let \(t_0>0\) be the first time at which the sup defining F(s, 0) is reached. Then \(F(s,x)\) is a \(\frac{1}{t_0}\)-Lipschitz function in x.

Proof

For the proof of Lemma 3, we write \(g(x)=F(s,x)\). Let \(x,y \in [0,a]\), where a is an arbitrary positive number. We want to prove that:

$$\begin{aligned} \exists k \in {\mathbb {R}}, \, \vert g(x)-g(y) \vert \le k \vert x-y \vert . \end{aligned}$$
(76)

Let us compute the difference \(\vert g(x)-g(y) \vert \):

$$\begin{aligned} \vert g(x)-g(y) \vert= & {} \left| \sup _{t>0} \frac{f_s(t) - x}{t} - \sup _{t>0} \frac{f_s(t) - y}{t}\right| \end{aligned}$$
(77)

Since the function \(f_s(t)\) is bounded (by \(C\varDelta \), see item 1 above), the sup defining g(x) is reached for a certain value of t, denoted \(t_x\).

By definition of the \(\sup \), we have \(\sup _{t>0} \frac{f_s(t) - y}{t} \ge \frac{f_s(t_x) - y}{t_x}\), hence Eq. (77) becomes:

$$\begin{aligned} \vert g(x)-g(y)\vert\le & {} \left| \frac{f_s(t_x)-x}{t_x} - \frac{f_s(t_x)-y}{t_x} \right| \nonumber \\ \vert g(x)-g(y)\vert\le & {} \left| \frac{y-x}{t_x} \right| \end{aligned}$$
(78)

Now let us prove Lemma 4, which states that \(t_x\) is an increasing function of x.

Lemma 4

Let \(t_x\) be the function of x such that:

$$\begin{aligned} \forall a \in {\mathbb {R}}, \, \forall x \in [0,a], \, t_x: x \longmapsto \mathop {{{\,{\text{argmax}}\,}}}\limits _t \frac{f_s(t)-x}{t} . \end{aligned}$$

Then \(t_x\) is an increasing function of x.

Proof

Let \(x,y \in [0,a]\) such that \(x \le y\), and let \(t_y\) be such that:

$$\begin{aligned} \frac{f_s(t_y)-y}{t_y} = \max _t \left( \frac{f_s(t)-y}{t} \right) \end{aligned}$$
(79)

The goal is to prove that Eq. (80) below holds for every \(t \ge t_y\) (this implies \(t_x \le t_y\)):

$$\begin{aligned} \frac{f_s(t)-x}{t} \le \frac{f_s(t_y) - x}{t_y} . \end{aligned}$$
(80)

By definition of the \(\max \) in Eq. (79), we have:

$$\begin{aligned} \forall t \in {\mathbb {R}}, \, f_s(t) \le \frac{f_s(t_y) - y}{t_y}t + y . \end{aligned}$$
(81)

We now define two lines:

  • the line \(L_1\) whose slope achieves the max in Eq. (79), i.e., the line that links the points (0, y) and \((t_y,f_s(t_y))\); its equation is the right-hand side of Eq. (81);

  • and the line \(L_2\) that links the point (0, x) on the ordinate axis and the point \((t,f_s(t))\), where \(t \ge t_y\).

The functions \(t \longmapsto L_1(t)\) and \(t \longmapsto L_2(t)\) describe the points of their respective lines.

By Eq. (81), we have \(f_s(t) \le L_1(t)\) for every t. Moreover, since \(t \ge t_y\), by construction of the line \(L_2\) we have \(L_2(t) = f_s(t) \le L_1(t)\). Since \(x \le y\), we also have \(L_2(0) = x \le y = L_1(0)\). The two lines thus satisfy \(L_2 \le L_1\) at the points 0 and t; being affine, they satisfy \(L_2 \le L_1\) on the whole interval [0, t], and in particular \(L_1(t_y) \ge L_2(t_y)\). Since \(L_1(t_y) = f_s(t_y)\) by construction, the expressions of \(L_1\) and \(L_2\) lead to the following inequality:

$$\begin{aligned} f_s(t_y) \ge \frac{f_s(t) - x}{t}t_y +x \, \, \Longleftrightarrow \, \, \frac{f_s(t_y) - x}{t_y} \ge \frac{f_s(t) - x}{t}. \end{aligned}$$

Equation (80) is therefore satisfied for every \(t \ge t_y\), which implies \(t_x \le t_y\); the function \(x \longmapsto t_x\) is therefore increasing. \(\square \)

Using Eq. (74), the fact that \(f_s\) is an increasing function, and the fact that \(f_s(0)=0\), the first time t such that \(f_s(t)/t>0\) is strictly larger than 0. Since \(F(s,0)\) is the sup of these ratios, the first time \(t_0\) at which this sup is reached is therefore strictly positive.

Since \(t_x\) is an increasing function of x by Lemma 4, we have \(t_x \ge t_0 > 0\) for all \(x \ge 0\), so Eq. (78) becomes:

$$\begin{aligned} \vert g(x)-g(y)\vert\le & {} \frac{1}{t_0}\vert y-x \vert . \end{aligned}$$
(82)

Equation (82) shows that g is \(\frac{1}{t_0}\)-Lipschitz. \(\square \)

Since \(F(s,x)\) is Lipschitz in x, the Picard–Lindelöf theorem allows us to conclude that there exists a unique solution W(t) of Eq. (75).

Therefore, by Eq. (71), \(\pi \) can be expressed in terms of W as follows:

$$\begin{aligned} \pi (t)= & {} \sup _{u > 0} \frac{f_t(u)-W(t)}{u}. \end{aligned}$$
(83)

By Eq. (83), \(\pi \) is a function of W, so \(\pi \) is also unique. \(\square \)
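The Picard–Lindelöf argument also suggests a constructive view of W: it is the limit of the Picard iterates \(W_{n+1}(t) = \int _k^t F(s, W_n(s)) {\text{d}}s\), starting from \(W_0 = 0\). The following Python sketch (a numerical illustration on a toy staircase \(w_0\) of our own choosing, not code from the paper) runs this iteration on a discretised grid for \(k = 0\):

```python
import numpy as np

# Picard iteration for the integral equation (75) on [0, 1) (k = 0):
#   W(t) = int_0^t F(s, W(s)) ds,  with  F(s, x) = sup_{u>0} (w0(u+s) - x) / u.
# The Lipschitz property of F (Lemma 3) is what makes these iterates converge
# to the unique solution.  The staircase w0 below is a toy example of ours.

def w0(v):
    """Toy remaining-work function: 3 units of work due at time 2, 1 more at time 4."""
    return np.where(v < 2, 0.0, np.where(v < 4, 3.0, 4.0))

def F(s, x, u_grid):
    """Discretised version of sup_{u>0} (w0(u+s) - x) / u."""
    return np.max((w0(u_grid + s) - x) / u_grid)

t_grid = np.linspace(0.0, 1.0, 201)       # discretisation of [0, 1]
u_grid = np.linspace(1e-3, 10.0, 2000)    # discretisation of u > 0
dt = t_grid[1] - t_grid[0]

W = np.zeros_like(t_grid)                 # Picard iterate W_0 = 0
for it in range(100):
    speeds = np.array([F(s, x, u_grid) for s, x in zip(t_grid, W)])
    W_next = np.concatenate(([0.0], np.cumsum(speeds[:-1]) * dt))  # left Riemann sum
    gap = np.max(np.abs(W_next - W))
    W = W_next
    if gap < 1e-9:                        # successive iterates coincide: fixed point
        break

print(f"{it + 1} iterations, W(1) ~ {W[-1]:.4f} (for this toy w0 the fixed point is W(t) = 1.5 t)")
```

On this toy instance the iterates converge to \(W(t) = 1.5\,t\) on [0, 1), which coincides on that interval with the smallest concave function above \(w_0\), in line with Appendix 2.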

Appendix 2: Concavity of the executed work by \( {\text{(OA)}}\) for a given w

In this appendix we provide a more detailed study of the speed policy \( {\text{(OA)}}\). We show that the work executed by \( {\text{(OA)}}\) is the smallest concave function above the remaining work function w(·), when w(·) is fixed (i.e., all the jobs arrive at time 0). Using the same notation as in the previous appendix (we take \(k=0\) and drop the index in \(w_0\)), we define \(W(t) =\int _0^t \pi ^ {\text{(OA)}} (u) {\text{d}}u\), the amount of work executed by \( {\text{(OA)}}\) from time 0 to time t. It is such that:

$$\begin{aligned} W(t)=\int _0^t \sup _{u \ge 0} \frac{w(u+s) - W(s)}{u} {\text{d}}s = \int _0^t \sup _{v \ge s} \frac{w(v) - W(s)}{v-s} {\text{d}}s . \end{aligned}$$
(84)

W(t) corresponds to the quantity of work executed between 0 and t, and the goal of this part is to show that W(t) is the smallest concave function that is above w(t).

Lemma 5

Let w be any real non-decreasing function that admits right-derivatives everywhere (not necessarily staircase), with \(w(0) = 0\). Then W(t) as defined in Eq. (84) satisfies the following properties:

  1. W is continuous, \(W(0) = 0\), and \(\forall t \ge 0\), \(W(t)\ge w(t)\).

  2. W is non-decreasing in w.

  3. If w is concave, then \(W(t) = w(t)\).

  4. W is concave.

  5. \(W = {\widehat{w}}\), where \({\widehat{w}}\) is the concave hull of w, i.e., the smallest concave function above w.

Proof

  1. W(t) being an integral from 0 to t, W is continuous, \(W(0) = 0\), and W has right-derivatives everywhere: \(W'_+(t) = \sup _{u \ge 0} \frac{w(u+t) - W(t)}{u}\). Let us denote by \(w'_+(\cdot )\) the right-derivative of w: \(w'_+(t) = \lim _{u\rightarrow 0, u\ge 0} (w(t+u) - w(t))/u\). Then \(w'_+(t) \le \sup _{u \ge 0} \frac{w(u+t) - w(t)}{u}\). Since \(w(0) = 0 = W(0)\), by Petrovitsch's theorem on differential inequalities (Petrovitsch 1901), we have \(W(t) \ge w(t)\) for all \(t \ge 0\).

  2. By definition of the function W, it is non-decreasing in w.

  3. Let us suppose that w is concave. By replacing W by w in the right-hand side of Eq. (84), the integrand becomes:

    $$\begin{aligned} \sup _{v \ge s} \frac{w(v) - w(s)}{v-s}. \end{aligned}$$
    (85)

    Since w is concave by assumption, it is right and left differentiable at any point t. In particular \(w'_+\), the right-derivative of w, is non-increasing. Therefore the sup is reached when v goes to s. We thus have:

    $$\begin{aligned} \sup _{v \ge s} \frac{w(v) - w(s)}{v-s} = w^{'}_{+}(s). \end{aligned}$$
    (86)

    Substituting Eq. (86) into Eq. (84), we obtain:

    $$\begin{aligned} \int _0^t \sup _{v \ge s} \frac{w(v) - w(s)}{v-s} {\text{d}}s = \int _0^t w^{'}_{+}(u) {\text{d}}u = w(t). \end{aligned}$$
    (87)

    The last equality in Eq. (87) is due to the fact that w is concave. Indeed, since w is concave, its derivative is defined on the whole interval [0, t] except on an at most countable set of points. The integral does not depend on these points, so we obtain Eq. (87).

    To conclude, w is a solution of Eq. (84) and so \(W=w\) by uniqueness of the solution.

  4. For any \(t \ge 0\), let \(L_t\) be the right tangent of W at the point t. The equation of the line \(L_t\) is: \(L_t(v) = W(t) + W'_+(t) (v-t)\). By definition of \(W'_+(t)\) as a sup, for all \(v > t\) we have \(W'_+(t) \ge \frac{w(v) - W(t)}{v-t}\), hence \(L_t(v) = W(t) + W'_+(t) (v-t) \ge w(v)\); since moreover \(W(t) \ge w(t)\), this inequality also holds at \(v = t\). Hence \(L_t(v) \ge w(v)\) for all \(v \ge t\).

    If we replace w by \(L_t\) in the definition of W, we get a new function \(W_L\) that is larger than W by item 2 and is equal to \(L_t\) by item 3. This means that W is below its right tangents.

    Now, this implies that W is concave. By contradiction, let \(x < y\) be such that \(\forall z \in (x,y)\), we have \(W(z) < A(z)\), where \(A(\cdot )\) is the affine interpolation between the points (x, W(x)) and (y, W(y)). Since W is below its right tangents, we have \(W(y) \le W(z) + W'_+(z) (y-z)\). Together with \(W(z) < A(z)\) and \(W(y) = A(y)\), this implies that \(W'_+(z)\) is larger than the slope of \(A(\cdot )\): in other words, \(W'_+(z) \ge \frac{W(y) - W(x)}{y-x}\). By integrating this inequality from x to z, we get \(W(z) \ge A(z)\). This contradicts the initial assumption that \(W(z) < A(z)\).

    The final conclusion is that W is always above its affine interpolation, hence W is concave.

  5. Let us prove first that \(W \le {\widehat{w}}\). By definition of the concave hull, we know that \(w \le {\widehat{w}}\), and as W is non-decreasing in w, \(W \le W_{{\widehat{w}}}\), where \(W_{{\widehat{w}}}\) is the function W in which w is replaced by \({\widehat{w}}\). Since \({\widehat{w}}\) is concave, item 3 gives \({\widehat{w}} = W_{{\widehat{w}}}\). This implies:

    $$\begin{aligned} W \le {\widehat{w}}. \end{aligned}$$

    Now we prove the other inequality, i.e., that \({\widehat{w}} \le W\). By item 1, we get \(W \ge w\). By item 4, we know that W is concave. Since \({\widehat{w}}\) is the smallest concave function above w, we finally have:

    $$\begin{aligned} W \ge {\widehat{w}}. \end{aligned}$$

    We therefore conclude that:

    $$\begin{aligned} W = {\widehat{w}}. \end{aligned}$$

\(\square \)
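As a numerical illustration of Lemma 5, the following Python sketch (a toy example of our own, not code from the paper) integrates the \( {\text{(OA)}}\) speed of Eq. (84) by an explicit Euler scheme for a staircase w and checks that the resulting W coincides, up to the grid resolution, with the least concave function above w:

```python
import numpy as np

# Numerical check of Lemma 5: integrating the (OA) speed of Eq. (84),
#   W'(t) = sup_{v > t} (w(v) - W(t)) / (v - t),   W(0) = 0,
# by an explicit Euler scheme yields the least concave function above w.
# The staircase w below is a toy example of ours, not taken from the paper.

def w(v):
    """Toy remaining-work staircase: 3 units of work due at time 2, 1 more at time 4."""
    return np.where(v < 2, 0.0, np.where(v < 4, 3.0, 4.0))

def concave_hull(v):
    """Least concave function above w, computed by hand for this staircase."""
    return np.where(v < 2, 1.5 * v, np.where(v < 4, 3.0 + 0.5 * (v - 2.0), 4.0))

dt = 0.01
grid = np.arange(0.0, 6.0 + dt, dt)        # time grid covering all deadlines
W = np.zeros_like(grid)                    # W(0) = 0

for i in range(len(grid) - 1):
    future = grid[i + 1:]                  # candidate instants v > t
    speed = np.max((w(future) - W[i]) / (future - grid[i]))
    W[i + 1] = W[i] + max(speed, 0.0) * dt # Euler step; clamp guards against rounding noise

err = np.max(np.abs(W - concave_hull(grid)))
print(f"max |W - concave hull of w| = {err:.2e}")   # ~0 up to discretisation error
```

With this staircase the computed W matches the hand-computed hull to within discretisation error; changing the staircase (and adapting concave_hull accordingly) gives the same agreement, in line with item 5 of Lemma 5.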

About this article

Cite this article

Gaujal, B., Girault, A. & Plassart, S. Feasibility of on-line speed policies in real-time systems. Real-Time Syst 56, 254–292 (2020). https://doi.org/10.1007/s11241-020-09347-y
