1 Introduction

Timed regular expressions (tre), introduced in [3, 4], constitute a formalism for expressing patterns in timed behaviors in a compact and natural way. They augment classical regular expressions with timing constraints and as such they provide an alternative specification style to real-time temporal logics such as MTL [10]. We believe that such expressions have numerous applications in many domains such as runtime verification, robotics, medical monitoring and circuit analysis [7, 9].

For a given expression \(\varphi \) and input signal w, timed pattern matching means computing the match set \(\mathcal {M}(\varphi , w)\) consisting of all pairs \((t,t')\) of time instants such that the segment of w between t and \(t'\) satisfies the expression \(\varphi \). In [17] we showed how to compute \(\mathcal {M}(\varphi , w)\) offline, assuming the input signal to be completely available before the matching. In this paper we develop an online matching algorithm where the input is presented incrementally and matches are computed on the fly. An online procedure can be used to monitor real systems during their actual executions (in contrast with monitoring simulations) and alert the user in real time. In addition, an online procedure can reduce memory requirements, discarding signals and intermediate matches when those are no longer needed.

The online pattern matching procedure that we develop in this paper is built upon the notion of derivatives of regular expressions, introduced by Brzozowski in 1964 [6]. In essence, the derivative of an expression with respect to a letter or word tells us what remains to be observed in order to reach acceptance. In this sense it is very similar to the tableaux construction used to build automata from temporal logic formulae. Derivatives provide an elegant solution for problems of language membership [14], pattern matching [13, 16] and automaton construction [1, 5, 6] and have been observed to be naturally suitable for monitoring behaviors of systems [12, 15]. The original notion of the derivative that we recall in Sect. 2 is based on discrete time and requires a careful adaptation to dense time. Moreover, as we will explain, matching is more complex than acceptance (of the word or its prefixes) and this has some implications for the association of derivatives with rewrite rules.

In Sect. 3 we modify the definition of signals, one of the commonly-used formalisms to express timed behaviors, so as to lift the theory of derivatives to the timed setting. Signals (and sequences) are traditionally defined to start at time zero and when two signals are concatenated as in \(w=u\cdot v\), the second argument v is shifted forward in time, to start at the end of u. In contrast, we define signals in absolute time, each having its own fixed starting point. In this setting concatenation becomes a partial function, defined only when the domains of definition of the two signals fit. We also introduce a special placeholder symbol \(\checkmark \) and define extended signals where all letters in some prefix have been replaced by this symbol.

We then adapt timed regular expressions to represent sets of extended signals using the absolute time semantics. The regular expressions of [3, 4, 17] are obtained as a syntactic sub-class denoting “pure” \(\checkmark \)-free signals, used for the initial specification. The more general expressions are used to represent intermediate stages during the incremental computation of the match set.

In Sect. 4 we introduce our main technical contribution: the definition and computation of derivatives of left-reduced timed regular expressions with respect to a constant signal of arbitrary duration and all its factors. We apply this result to solve the problem of online timed pattern matching in Sect. 5 where we observe an input signal consisting of a finite concatenation of constant signals. We give a complete example of a run of our algorithm and briefly mention our implementation and its performance.

2 Preliminaries

Let \(\varSigma ^{*}\) be the set of all finite words over alphabet \(\varSigma \) with \(\epsilon \) denoting the empty word. A language \(\mathcal {L}\) over \(\varSigma \) is a subset of \(\varSigma ^{*}\). The syntax of regular expressions over \(\varSigma \) is given by the following grammar:

$$\begin{aligned} r := \varnothing \ |\ \epsilon \ |\ a\ |\ r_{1} \cdot r_{2}\ |\ r_{1} \vee r_{2}\ |\ r^{*} \end{aligned}$$

where \(a\in \varSigma \). A regular expression r specifies a regular language \(\llbracket r \rrbracket \), inductively defined as follows:

$$\begin{aligned} \begin{array}{rclcrcl} \llbracket \varnothing \rrbracket &{} = &{} \emptyset &{} \quad \quad &{} \llbracket r_{1}\cdot r_{2}\rrbracket &{} = &{} \llbracket r_{1}\rrbracket \cdot \llbracket r_{2} \rrbracket \\ \llbracket \epsilon \rrbracket &{} = &{} \{ \epsilon \} &{} &{} \llbracket r_{1}\vee r_{2}\rrbracket &{} = &{} \llbracket r_{1}\rrbracket \cup \llbracket r_{2} \rrbracket \\ \llbracket a \rrbracket &{} = &{} \{ a\} &{} &{} \llbracket r^{*}\rrbracket &{} = &{} \llbracket r\rrbracket ^{*}\\ \end{array} \end{aligned}$$

In some cases it is important to determine whether or not the language of a regular expression r contains the empty word \(\epsilon \). For this purpose an empty word extraction function \(\nu \) (also known as the nullability predicate) is defined such that

$$\begin{aligned} \nu (r) = {\left\{ \begin{array}{ll} \epsilon &{} \text { if } \epsilon \in \llbracket r \rrbracket \\ \varnothing &{} \text { otherwise } \end{array}\right. } \end{aligned}$$

This function, which extracts \(\epsilon \) from r if it exists, is computed inductively by the following rules:

$$\begin{aligned} \begin{array}{rclcrcl} \nu (\varnothing ) &{} = &{} \varnothing &{} \quad \quad &{} \nu (r_{1}\cdot r_{2}) &{} = &{} \nu (r_{1})\cdot \nu (r_{2})\\ \nu (\epsilon ) &{} = &{} \epsilon &{} &{} \nu (r_{1}\vee r_{2}) &{} = &{} \nu (r_{1})\vee \nu (r_{2}) \\ \nu (a) &{} = &{} \varnothing &{} &{} \nu (r^{*}) &{} = &{} \epsilon \end{array} \end{aligned}$$

Definition 1

(Derivative). The derivative of a language \(\mathcal {L}\) with respect to a word u is defined as

$$ D_{u}(\mathcal {L}) := \{\ v\in \varSigma ^{*}\ |\ u\cdot v \in \mathcal {L} \}. $$

In [6] Brzozowski applied the notion of derivatives to regular expressions and proved that the derivative \(D_{a}(r)\) of an expression r with respect to a letter a can be computed recursively using the following syntactic rewrite rules:

$$\begin{aligned} \begin{array}{rclcrcl} D_{a}(\varnothing ) &{} = &{} \varnothing &{} \quad \quad &{} D_{a}( r_{1}\cdot r_{2}) &{} = &{} D_{a}( r_{1})\cdot r_{2}\ \vee \ \nu ( r_{1})\cdot D_{a}( r_{2})\\ D_{a}(\epsilon ) &{} = &{} \varnothing &{} &{} D_{a}( r_{1}\vee r_{2}) &{} = &{} D_{a}( r_{1}) \vee D_{a}( r_{2} )\\ D_{a}(a) &{} = &{} \epsilon &{} &{} D_{a}( r^{*}) &{} = &{} D_{a}( r)\cdot r^{*}\\ D_{a}(b) &{} = &{} \varnothing \\ \end{array} \end{aligned}$$

These rules are extended to words by letting \(D_{a\cdot w}(r) = D_{w}(D_{a}(r))\). By definition, membership \(w \in \mathcal {L}\) is equivalent to \(\epsilon \in D_{w}(\mathcal {L})\). Hence to check, for example, whether abc is in the language of the expression \(\varphi = a^{*}\cdot (b\cdot c)^{*}\) we compute \(D_{abc}(\varphi ) = D_{c}(D_{b}(D_{a}(\varphi ))) = (b\cdot c)^{*}\) as follows:

$$\begin{aligned} \begin{array}{rcl} D_{a}(a^{*}\cdot (b\cdot c)^{*}) &{} = &{} D_{a}(a^{*})\cdot (b\cdot c)^{*}\ \vee \ \nu (a^{*})\cdot D_{a}((b\cdot c)^{*}) \ =\ a^{*}\cdot (b\cdot c)^{*}\\ D_{b}(a^{*}\cdot (b\cdot c)^{*}) &{} = &{} D_{b}(a^{*})\cdot (b\cdot c)^{*}\ \vee \ \nu (a^{*})\cdot D_{b}((b\cdot c)^{*}) \ =\ c\cdot (b\cdot c)^{*}\\ D_{c}(c\cdot (b\cdot c)^{*}) &{} = &{} D_{c}(c)\cdot (b\cdot c)^{*}\ \vee \ \nu (c)\cdot D_{c}((b\cdot c)^{*}) \ =\ (b\cdot c)^{*}\\ \end{array} \end{aligned}$$

and since \(\nu ((b\cdot c)^{*}) = \epsilon \), \(abc\in \llbracket \varphi \rrbracket \).
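To make the discrete-time rules concrete, the following minimal sketch implements \(\nu \) and \(D_{a}\) over a small expression datatype in Python. The constructor and function names are illustrative choices made here (not taken from the paper's implementation or any library); the final assertion replays the membership test for abc above.

```python
# A minimal sketch of Brzozowski derivatives over classical regular expressions.
from dataclasses import dataclass

class RE: pass

@dataclass(frozen=True)
class Empty(RE): pass              # the empty language, written ∅ in the text

@dataclass(frozen=True)
class Epsilon(RE): pass            # the empty word ε

@dataclass(frozen=True)
class Letter(RE):
    a: str

@dataclass(frozen=True)
class Cat(RE):
    left: RE; right: RE

@dataclass(frozen=True)
class Or(RE):
    left: RE; right: RE

@dataclass(frozen=True)
class Star(RE):
    arg: RE

def nullable(r: RE) -> bool:
    """True iff ε ∈ ⟦r⟧, i.e. ν(r) = ε."""
    if isinstance(r, (Empty, Letter)): return False
    if isinstance(r, (Epsilon, Star)): return True
    if isinstance(r, Cat): return nullable(r.left) and nullable(r.right)
    if isinstance(r, Or):  return nullable(r.left) or nullable(r.right)
    raise TypeError(r)

def deriv(r: RE, a: str) -> RE:
    """D_a(r), following the rewrite rules recalled in Sect. 2."""
    if isinstance(r, (Empty, Epsilon)): return Empty()
    if isinstance(r, Letter): return Epsilon() if r.a == a else Empty()
    if isinstance(r, Or):   return Or(deriv(r.left, a), deriv(r.right, a))
    if isinstance(r, Star): return Cat(deriv(r.arg, a), r)
    if isinstance(r, Cat):
        d = Cat(deriv(r.left, a), r.right)
        return Or(d, deriv(r.right, a)) if nullable(r.left) else d
    raise TypeError(r)

def matches(r: RE, w: str) -> bool:
    """w ∈ ⟦r⟧ iff ε ∈ D_w(⟦r⟧)."""
    for a in w:
        r = deriv(r, a)
    return nullable(r)

# φ = a*·(b·c)* and the membership test for abc from the running example
phi = Cat(Star(Letter('a')), Star(Cat(Letter('b'), Letter('c'))))
assert matches(phi, "abc")
```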

It is of course not a coincidence that this procedure resembles the reading of the word by an automaton where derivatives correspond to states and those that contain \(\epsilon \) correspond to accepting states. Hence we can report membership of w in \(\llbracket \varphi \rrbracket \), as well as membership of all its prefixes, incrementally as new letters arrive.

Matching is more involved as we are interested in the membership of all factors of w, starting at arbitrary positions. Thus, having read j letters of w, the state of a matching algorithm should contain all the derivatives by w[i..j], \(i\le j\). When letter \(j+1\) is read, these derivatives are updated to become derivatives by \(w[i..j+1]\), new matches are extracted and a new process for matches that start at \(j+1\) is spawned. Table 1 illustrates the systematic application of derivatives to find segments of \(w=abcbc\) that match \(\varphi = a^{*}\cdot (b\cdot c)^{*}\). The table is indexed by the start position (rows) and end position (columns) of the segments with respect to which we derive. Derivatives that contain \(\epsilon \) correspond to matches and their time indices constitute the match set \(\{(1,1), (1,3), (1,5), (2,3), (2,5), (4,5)\}\). In a discrete finite-state setting there are finitely many such derivatives but this is not the case for timed systems.
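The matching scheme of Table 1 then amounts to maintaining one derivative per start position and spawning a new one at every step. A short sketch, reusing nullable, deriv and phi from the previous snippet (again illustrative names, not the paper's code):

```python
def match_set(phi: RE, w: str):
    """All pairs (i, j), 1-based, such that w[i..j] ∈ ⟦phi⟧, as in Table 1."""
    matches, state = set(), {}          # state[i] = derivative of phi by w[i..j]
    for j, a in enumerate(w, start=1):
        state = {i: deriv(r, a) for i, r in state.items()}  # extend old starts
        state[j] = deriv(phi, a)                            # spawn start at j
        matches |= {(i, j) for i, r in state.items() if nullable(r)}
    return matches

print(sorted(match_set(phi, "abcbc")))
# [(1, 1), (1, 3), (1, 5), (2, 3), (2, 5), (4, 5)] -- the match set of Table 1
```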

Table 1. Pattern matching using derivatives for \(w=abcbc\) and \(\varphi = a^{*}\cdot (b\cdot c)^{*}\). Entry (i, j) represents the derivative with respect to w[i..j]. Derivatives containing \(\epsilon \) are shaded in green. The state of an online matching algorithm after reading j symbols is represented in column j.

In dense time, the analogue of the arrival of a new letter is the arrival of a constant segment of the signal \(w[t_1,t_2]\). When this occurs, the state of the algorithm should be updated to capture all derivatives by segments of the form \(w[t,t_2]\) for \(t<t_2\) and all matches ending in some \(t<t_2\) should be extracted. The technique for representing and manipulating such an uncountable number of derivatives together with their corresponding time segments is the main contribution of this paper.

3 Signals, Timed Languages and Expressions

We consider an alphabet \(\varSigma =\mathbb {B}^m\) which is the set of valuations of a set of propositional variables \(P=\{p_1,\ldots ,p_m\}\). We define signals not as free-floating objects but anchor them in absolute time.

Definition 2

(Signals). A signal over an alphabet \(\varSigma \) is a piecewise-constant function \(w : [t_{1}, t_{2}) \longrightarrow \varSigma \), where \(t_{1} \le t_{2}\in \mathbb {R}_{\ge 0}\) and w admits a finite number of discontinuities. The time domain of the signal and its beginning and end times are denoted as

$$ \mathrm{dom}(w) = [t_{1},t_{2}) = [\tau _{1}(w),\tau _{2}(w)). $$

The empty signal \(\epsilon \) is the unique signal satisfying \(\mathrm{dom}(w)= \emptyset \). The duration of w is \(|w| = \tau _{2}(w) - \tau _{1}(w)\) and \(|\epsilon | = 0\). We often view the boundary points of a signal as a pair, \(\tau (w) = (\tau _{1}(w), \tau _{2}(w))\).

We use \(w[t, t']\) to denote the restriction of w to an interval \([t, t') \subseteq dom (w)\) and let \(Sub (w) = \{ w[t,t']\ |\ \tau _{1}(w)\le t < t' \le \tau _{2}(w)\}\) be the set of sub-signals (factors, segments) of w. Concatenation is restricted to signals that meet, that is, one ends where the other starts.

Definition 3

(Meets and Concatenation). Signal \(w_{1}\) meets signal \(w_{2}\) when \(w_1=\epsilon \) or \(w_2=\epsilon \) or \(\tau _{2}(w_{1}) = \tau _{1}(w_{2})\). Concatenation is a partial function such that \(w_{1}\cdot w_{2}\) is defined only if \(w_{1}\) meets \(w_{2}\):

$$\begin{aligned} w_{1}\cdot w_{2}(t) = {\left\{ \begin{array}{ll} w_{1}(t) &{} \text {if } t \in \mathrm{dom}(w_{1}) \\ w_{2}(t) &{} \text {if } t \in \mathrm{dom}(w_{2}) \end{array}\right. } \end{aligned}$$

The empty signal \(\epsilon \) is the neutral element for concatenation: \(\epsilon \cdot w = w\cdot \epsilon = w\). The set of signals thus defined can be made a monoid by making concatenation total: we introduce a new element \(\bot \) and let \(w_1 \cdot w_2=\bot \) when the signals do not meet. The newly introduced element is an absorbing zero satisfying \(\bot \cdot w = w\cdot \bot = \bot \).
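For illustration, here is a minimal sketch of absolutely-timed signals and their partial concatenation (Definitions 2 and 3). Representing a signal as a list of (value, begin, end) segments is an assumption made here for the sketch, not the paper's implementation.

```python
# Signals anchored in absolute time, with concatenation as a partial function.
from typing import List, Optional, Tuple

Segment = Tuple[str, float, float]   # (value, begin, end) with begin < end
Signal = List[Segment]               # piecewise-constant, finitely many segments

def dom(w: Signal) -> Optional[Tuple[float, float]]:
    """Time domain [τ1(w), τ2(w)); None stands for the empty signal ε."""
    return None if not w else (w[0][1], w[-1][2])

def meets(w1: Signal, w2: Signal) -> bool:
    """w1 meets w2 if either is ε or w1 ends where w2 begins."""
    return not w1 or not w2 or dom(w1)[1] == dom(w2)[0]

def concat(w1: Signal, w2: Signal) -> Optional[Signal]:
    """Partial concatenation: defined only when w1 meets w2 (⊥ encoded as None)."""
    return w1 + w2 if meets(w1, w2) else None

# [0,3) ↦ a concatenated with [3,5) ↦ b is defined; swapping the arguments is not.
w = concat([('a', 0.0, 3.0)], [('b', 3.0, 5.0)])
assert w is not None and dom(w) == (0.0, 5.0)
assert concat([('b', 3.0, 5.0)], [('a', 0.0, 3.0)]) is None
```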

The variability (logical length) of a signal w is the minimal n such that w can be written as \(w = w_{1}\cdot w_{2}\cdots w_{n}\) where each \(w_{i}\) is a constant signal. We use notations \(\varSigma ^{(*)}\), \(\varSigma ^{(+)}\) and \(\varSigma ^{(n)}\) to denote the set of all signals, non-empty signals and signals of variability n, respectively. In particular, \(\varSigma ^{(1)}\) is the set of all constant signals. Sets of signals are referred to as signal languages on which Boolean operations as well as concatenation and star are defined naturally. Finally we extend the time restriction operation of [4], which constrains the duration of signals, to apply also to their time domains. The language \(^{K}_{J}\langle \mathcal {L}\rangle _{I}\), where I, J and K are intervals of non-negative reals, is the subset of \(\mathcal {L}\) consisting of signals with duration in I, beginning at \(t_1\in J\) and ending at \(t_2\in K\). We omit the corresponding interval when there is no restriction on beginning, ending or duration.

We are interested in representing a family of sub-signals of an n-variability signal \(w = w_{1}\dots w_{n}\) starting in segment i and ending in segment j, that is, \(Sub _{[i:j]}(w):= \{w[t,t']\ |\ t\in dom(w_{i}) \text { and } t'\in dom(w_{j})\}\). It can be easily verified that

$$Sub _{[i:j]}(w)= Sub (w_i)\cdot w_{i+1} \cdots Sub (w_j) = Sub (w_i)\cdot Sub (w_{i+1}) \cdots Sub (w_j).$$

In the classical discrete setting, the derivative \(D_{a}\) is associated with a rewrite rule \(a\rightarrow \epsilon \) and a word w is accepted if it can be transformed into \(\epsilon \) by successive rewritings. For the purpose of timed matching we need a length-preserving view where reading a corresponds to a rule \(a\rightarrow \checkmark \) where \(\checkmark \) is a special placeholder that indicates that a has been processed. Acceptance then corresponds to the rewriting of w into a signal \(w': dom(w) \mapsto \checkmark \). We let \(\varSigma _\checkmark =\varSigma \cup \{\checkmark \}\) and define extended signals which are signals over \(\varSigma _{\checkmark }\), as well as some subclasses of those.

Definition 4

(Extended Signals). An extended signal over alphabet \(\varSigma \) is a function \(w:[t,t')\rightarrow \varSigma _\checkmark \). An extended signal w is left-reduced if \(w\in \checkmark ^{(*)}\cdot \varSigma ^{(*)}\). A left-reduced signal w is pure if \(w\in \varSigma ^{(*)}\) and reduced if \(w\in \checkmark ^{(*)}\).

We use initial Greek letters to denote reduced signals and hence a left-reduced signal w will be written as \(w = \alpha \cdot v\) where \(\alpha \) is a reduced signal and v is a pure signal.

Definition 5

(Left Reduction). A reduction rule R(u) for a signal \(u\in \varSigma ^{(*)}\) is a pair \((u, \gamma )\) such that \(\gamma \in \checkmark ^{(*)}\) and \(\mathrm{dom}(u) = \mathrm{dom}(\gamma )\). The left reduction of a left-reduced signal language \(\mathcal {L}\) with respect to u is:

$$\begin{aligned} \delta _{u}(\mathcal {L}) := \{\ \alpha \gamma w\ |\ \alpha u w\in \mathcal {L},\ \alpha \in \checkmark ^{(*)} \ and \ w\in \varSigma ^{(*)} \} \end{aligned}$$

We use the operation \(\delta _u(\mathcal {L})\) in a similar way as \(D_u(\mathcal {L})\) is used in the classical setting but with one important difference. When \(v=D_u(w)\) the length of the word is reduced, that is, \(|v|=|w|-|u|\), while when \(v=\delta _u(w)\) the domains (and hence durations) of v and w are the same. Consequently, unlike the classical case where membership of w in \(\mathcal {L}\) amounts to \(\epsilon \in D_w(\mathcal {L})\), here the membership is equivalent to \(\gamma \in \delta _w(\mathcal {L})\) where \(\gamma \) is a reduced signal of the same domain as w. It is not difficult to check that \(\delta _{u_{1}\cdot u_{2}}(\mathcal {L}) = \delta _{u_{2}}(\delta _{u_{1}}(\mathcal {L}))\) and sometimes we denote by \(\delta _{S}\) the left reduction with respect to a set S of signals, that is, \(\delta _{S}(\mathcal {L}) = \bigcup _{u\in S}\delta _{u}(\mathcal {L})\).

Example 1

Consider a signal language \(\mathcal {L} = \{ w_{1}, w_{2}\}\) such that

$$\begin{aligned} w_{1}(t)&= {\left\{ \begin{array}{ll} a &{} \text {if } t \in [0, 3) \\ b &{} \text {if } t \in [3, 5) \end{array}\right. }&w_{2}(t)&= {\left\{ \begin{array}{ll} a &{} \text {if } t \in [0, 2) \\ b &{} \text {if } t \in [2, 5) \end{array}\right. } \end{aligned}$$

In Fig. 1 we illustrate a left reduction operation \(\delta _{u_{3}}(\delta _{u_{2}}(\delta _{u_{1}}(\mathcal {L}))) = \{ w_{1}'''\}\) with respect to \(u = u_{1}u_{2}u_{3}\) with \(u_{1}: [0,1)\mapsto a\), \(u_{2}: [1,3)\mapsto a\) and \(u_{3}: [3,5)\mapsto b\). Since \(w_{1}'''\) is a reduced signal and \(\tau (u)=\tau (w_{1}''')\), \(u \in \mathcal {L}\).

Fig. 1. A left reduction example.

We now introduce timed regular expressions to describe sets of signals and extended signals using the absolute time semantics. Note that the intersection operator, which is considered syntactic sugar in the classical theory, adds expressiveness in the timed setting [4].

Definition 6

(Extended Timed Regular Expressions). Extended timed regular expressions are defined by the following grammar:

$$\begin{aligned} \varphi := \varnothing \ |\ \epsilon \ |\ p\ |\ \checkmark \ |\ \varphi _{1} \cdot \varphi _{2}\ |\ \varphi _{1} \vee \varphi _{2}\ |\ \varphi _{1} \wedge \varphi _{2}\ |\ \varphi ^*\ |\ ^{K}_{J}\langle \varphi \rangle _I \end{aligned}$$

where p is a propositional variable in P and I, J, K are intervals of \(\mathbb {R}_{\ge 0}\).

The semantics of the expressions is defined by the following rules (we use \(a\models p\) to denote the fact that p holds at a):

$$\begin{aligned} \begin{array}{rcl} \llbracket \varnothing \rrbracket &{} = &{} \emptyset \\ \llbracket \epsilon \rrbracket &{} = &{} \{ \epsilon \} \\ \llbracket p \rrbracket &{} = &{} \{ w:[t, t')\rightarrow \varSigma \ |\ 0\le t<t' \text { and } \forall t'' \in [t,t').\ w(t'')\models p \}\\ \llbracket \checkmark \rrbracket &{} = &{} \{ w:[t, t')\rightarrow \{\checkmark \} \ |\ 0\le t<t' \}\\ \llbracket \varphi \cdot \psi \rrbracket &{} = &{} \llbracket \varphi \rrbracket \cdot \llbracket \psi \rrbracket \\ \llbracket \varphi \vee \psi \rrbracket &{} = &{} \llbracket \varphi \rrbracket \cup \llbracket \psi \rrbracket \\ \llbracket \varphi \wedge \psi \rrbracket &{} = &{} \llbracket \varphi \rrbracket \cap \llbracket \psi \rrbracket \\ \llbracket \varphi ^* \rrbracket &{} = &{} \bigcup \limits ^{\infty }_{i=0} \llbracket \varphi \rrbracket ^{i}\\ \llbracket ^{K}_{J}\langle \varphi \rangle _I \rrbracket &{} = &{} \{w\ |\ w \in \llbracket \varphi \rrbracket ,\ |w| \in I,\ w\not = \epsilon \rightarrow (\tau _{1}(w)\in J \wedge \tau _{2}(w)\in K)\} \\ \end{array} \end{aligned}$$

A signal language is regular if it can be represented by a timed regular expression.
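To ground the constructions that follow (extraction in Theorem 1 and derivatives in Theorem 2), here is one possible abstract syntax for extended timed regular expressions in Python, independent of the discrete-time sketch of Sect. 2. The constructor names and the interval encoding are illustrative choices made here, not the grammar of any existing tool.

```python
# A sketch of an abstract syntax tree for extended timed regular expressions
# (Definition 6); interval bounds are floats, with math.inf for "unbounded".
import math
from dataclasses import dataclass
from typing import Tuple

Interval = Tuple[float, float]
TOP: Interval = (0.0, math.inf)        # stands for "no restriction"

class TRE: pass

@dataclass(frozen=True)
class Empty(TRE): pass                 # ∅

@dataclass(frozen=True)
class Eps(TRE): pass                   # ε

@dataclass(frozen=True)
class Prop(TRE):                       # a propositional variable p
    name: str

@dataclass(frozen=True)
class Check(TRE): pass                 # the placeholder symbol ✓

@dataclass(frozen=True)
class Cat(TRE):
    left: TRE; right: TRE

@dataclass(frozen=True)
class Or(TRE):
    left: TRE; right: TRE

@dataclass(frozen=True)
class And(TRE):
    left: TRE; right: TRE

@dataclass(frozen=True)
class Star(TRE):
    arg: TRE

@dataclass(frozen=True)
class Restrict(TRE):                   # ^K_J⟨φ⟩_I : duration in I, begin in J, end in K
    arg: TRE
    I: Interval = TOP
    J: Interval = TOP
    K: Interval = TOP

# ⟨p·q⟩_[4,7], the pure expression used in the example of Sect. 5
phi = Restrict(Cat(Prop('p'), Prop('q')), I=(4.0, 7.0))
```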

The syntax in Definition 6 allows us to define sets that include extended signals with arbitrary interleavings of letters and \(\checkmark \). Below we define three syntactic classes of expressions. The first class, called pure (or original) timed regular expressions, has almost the same syntax as the expressions of [3, 4, 17]. Pure expressions are \(\checkmark \)-free and do not place any restriction on the absolute beginning and ending values of their sub-expressions. The second class is that of reduced timed regular expressions, which are formed using the \(\checkmark \) symbol only. Lastly we have left-reduced timed regular expressions, obtained as compositions of reduced and pure expressions satisfying some conditions.

Definition 7

(Syntactic Classes). A timed regular expression \(\varphi \) belongs to the classes of reduced, pure or left-reduced timed regular expressions if functions r?, p? or lr?, respectively, evaluate to true in the following table.

[Table of the inductive definitions of the predicates r?, p? and lr? over the syntax of Definition 6.]

Trivially, any reduced expression \(\psi \) and any pure expression \(\varphi \) represent reduced and pure signal languages, respectively, such that \(\llbracket \psi \rrbracket \subseteq \checkmark ^{(*)}\) and \(\llbracket \varphi \rrbracket \subseteq \varSigma ^{(*)}\). For left-reduced expressions, Definition 7 does not allow concatenation and star over arbitrary left-reduced sub-expressions because left-reduced languages are not closed under concatenation. With this restriction we have the following result.

Proposition 1

The language \(\llbracket \varphi \rrbracket \) of a left-reduced timed regular expression \(\varphi \) is an extended signal language such that \(\llbracket \varphi \rrbracket \subseteq \checkmark ^{(*)}\cdot \varSigma ^{(*)}\).

Proof

For the concatenation \(\varphi _{1}\cdot \varphi _{2}\) we have two possibilities: (1) \(\llbracket \varphi _{1}\rrbracket \subset \checkmark ^{(*)}\cdot \varSigma ^{(*)}\) and \(\llbracket \varphi _{2}\rrbracket \subset \varSigma ^{(*)}\); (2) \(\llbracket \varphi _{1}\rrbracket \subset \checkmark ^{(*)}\) and \(\llbracket \varphi _{2}\rrbracket \subset \checkmark ^{(*)}\cdot \varSigma ^{(*)}\). For both possibilities, we have \(\llbracket \varphi _{1}\cdot \varphi _{2}\rrbracket = \llbracket \varphi _{1}\rrbracket \cdot \llbracket \varphi _{2}\rrbracket \subset \checkmark ^{(*)}\cdot \varSigma ^{(*)}\). Other cases are straightforward by following the definitions.

A comprehensive study of regular algebra extended with the intersection operation can be found in [2]. We now mention some algebraic rules relative to the time restriction operator. It is shown in [17] how the right-hand side of the following equations can be computed from the corresponding left-hand side.

$$ ^{K_{1}}_{J_{1}}\langle \checkmark \rangle _{I_{1}}\cdot \ ^{K_{2}}_{J_{2}}\langle \checkmark \rangle _{I_{2}} =\ ^{K_{3}}_{J_{3}}\langle \checkmark \rangle _{I_{3}} ~~~~~~\text{ and }~~~~~~ ^{K_{1}}_{J_{1}}\langle \checkmark \rangle _{I_{1}}\wedge \ ^{K_{2}}_{J_{2}}\langle \checkmark \rangle _{I_{2}} =\ ^{K_{3}}_{J_{3}}\langle \checkmark \rangle _{I_{3}} $$

for some intervals \(I_3\), \(J_3\) and \(K_3\), and

$$\begin{aligned} (\underset{i=1}{\overset{m}{\bigvee }}\ ^{K_{i}}_{J_{i}} \langle \checkmark \rangle _{I_{i}})^{+} = \underset{i=1}{\overset{n}{\bigvee }}\ ^{K'_{i}}_{J'_{i}}\langle \checkmark \rangle _{I'_{i}}\quad \text { for some } m,n\in \mathbb {N} \end{aligned}$$

Therefore we can simplify timed regular expressions further using these equations and procedures.
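As an illustration of the kind of interval reasoning involved, the sketch below computes the intersection rule for two restricted \(\checkmark \)-atoms: a reduced signal belongs to both atoms exactly when its duration, beginning and end lie in the pairwise intersections of the corresponding intervals. The concatenation and star rules of [17] additionally eliminate the meeting points and are not reproduced here; the tightening step at the end is only a partial normalization, assumed here for illustration.

```python
def inter(a, b):
    """Intersection of two closed intervals; None encodes the empty interval."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def and_check(g1, g2):
    """Γ1 ∧ Γ2 for Γi = ^Ki_Ji⟨✓⟩_Ii, each given as a triple (Ii, Ji, Ki)."""
    (I1, J1, K1), (I2, J2, K2) = g1, g2
    I, J, K = inter(I1, I2), inter(J1, J2), inter(K1, K2)
    if None in (I, J, K):
        return None                       # the result denotes ∅
    # tighten the duration against end minus begin; a full normal form would
    # also tighten J and K symmetrically
    I = inter(I, (K[0] - J[1], K[1] - J[0]))
    return (I, J, K) if I is not None else None

# e.g. ^[0,8]_[0,3]⟨✓⟩_[0,8] ∧ ^[4,8]_[0,8]⟨✓⟩_[4,7] gives ^[4,8]_[0,3]⟨✓⟩_[4,7]
print(and_check(((0, 8), (0, 3), (0, 8)), ((4, 7), (0, 8), (4, 8))))
# ((4, 7), (0, 3), (4, 8))
```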

4 Derivatives of Left-Reduced Timed Regular Expressions

We now introduce, semantically and syntactically, a derivative operation for left-reduced signal languages and expressions based on the left reduction operation. Since our goal is to solve the dense time matching problem, we have to operate on sets of signals and define derivatives more symbolically. Therefore we define the derivative \(\varDelta _{v}\) to correspond to the left reduction with respect to all factors of v.

Definition 8

(Dense Derivative). The derivative \(\varDelta _{v}(\mathcal {L})\) of a left-reduced language \(\mathcal {L}\) with respect to a constant signal \(v\in \varSigma ^{(1)}\) is defined as follows:

$$\begin{aligned} \varDelta _{v}(\mathcal {L}) := \bigcup _{u\in \text {Sub}(v)}\delta _{u}(\mathcal {L}) \end{aligned}$$

As mentioned previously, reduced signals will provide the output of our matching procedure. Their existence will be the witness of a match and their time domains will indicate its position in the signal.

Definition 9

(Extraction). The extraction \(\text {xt}(\mathcal {L})\) of a left-reduced signal language \(\mathcal {L}\) is

$$\begin{aligned} \text {xt}(\mathcal {L}) := \{\ \alpha \ |\ \alpha \in \checkmark ^{(*)} \cap \mathcal {L}\} \end{aligned}$$

The following result shows that \(\text {xt}\) can be computed syntactically for left-reduced timed regular expressions.

Theorem 1

(Extraction Computation). For a given left-reduced timed regular expression \(\varphi \), applying the following rules recursively yields an expression \(\psi \) such that \(\llbracket \psi \rrbracket =\text {xt}(\llbracket \varphi \rrbracket )\).

$$\begin{aligned} \begin{array}{ccc} \begin{array}{lcl} \text {xt}(\varnothing ) &{} = &{} \varnothing \\ \text {xt}(\epsilon ) &{} = &{} \epsilon \\ \text {xt}(p) &{} = &{} \varnothing \\ \text {xt}(\checkmark ) &{} = &{} \checkmark \\ \end{array} &{} ~~~~~~~~~~~ &{} \begin{array}{lcl} \text {xt}(\psi _{1}\cdot \psi _{2}) &{} = &{} \text {xt}(\psi _{1})\cdot \text {xt}(\psi _{2}) \\ \text {xt}(\psi _{1}\vee \psi _{2}) &{} = &{} \text {xt}(\psi _{1})\vee \text {xt}(\psi _{2}) \\ \text {xt}(\psi _{1}\wedge \psi _{2}) &{} = &{} \text {xt}(\psi _{1})\wedge \text {xt}(\psi _{2}) \\ \text {xt}(^{K}_{J}\langle \psi \rangle _I) &{} = &{} ^{K}_{J}\langle \text {xt}(\psi ) \rangle _I\\ \text {xt}(\psi ^{*}) &{} = &{} (\text {xt}(\psi ))^{*} \\ \end{array} \end{array} \end{aligned}$$

Proof

We proceed by induction and only look at the case of concatenation; the other cases are similar. For any expressions \(\varphi _1\), \(\varphi _2\) it holds that

$$\begin{aligned} {\begin{array}{rcl} \llbracket \text {xt}(\varphi _{1}\cdot \varphi _{2}) \rrbracket &{} = &{} \{ \alpha \ |\ \alpha \in \checkmark ^{(*)} \text { and } \alpha \in \llbracket \varphi _{1}\cdot \varphi _{2}\rrbracket \}\\ &{} = &{} \{ \alpha _{1}\alpha _{2}\ |\ \alpha _{1}, \alpha _{2} \in \checkmark ^{(*)},\ \alpha _{1} \in \llbracket \varphi _{1}\rrbracket \text { and } \alpha _{2} \in \llbracket \varphi _{2}\rrbracket \}\\ &{} = &{} \{ \alpha _{1}\ |\ \alpha _{1} \in \checkmark ^{(*)} \text { and } \alpha _{1} \in \llbracket \varphi _{1}\rrbracket \}\cdot \{ \alpha _{2}\ |\ \alpha _{2} \in \checkmark ^{(*)} \text { and } \alpha _{2} \in \llbracket \varphi _{2}\rrbracket \}\\ &{} = &{} \llbracket \text {xt}(\varphi _{1}) \rrbracket \cdot \llbracket \text {xt}(\varphi _{2}) \rrbracket \end{array}} \end{aligned}$$

Example 2

Consider a left-reduced expression \(\varphi := \big \langle \ ^{[0,3]}_{[0,3]}\langle \checkmark \rangle _{[0,3]}\cdot p^{*} \big \rangle _{[0,2]}\). Applying Theorem 1 we extract from \(\varphi \) a reduced expression \(\psi \) such that \(\psi = \big \langle \ ^{[0,3]}_{[0,3]}\langle \checkmark \rangle _{[0,3]} \big \rangle _{[0,2]}\). Expression \(\psi \) can be simplified further to \(^{[0,3]}_{[0,3]}\langle \checkmark \rangle _{[0,2]}\).
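Over the abstract syntax sketched after Definition 6, the extraction rules of Theorem 1 translate almost literally; here is a minimal sketch (no simplification of the result is attempted).

```python
def xt(phi: TRE) -> TRE:
    """Extraction: keep only the reduced (✓-only) signals denoted by phi."""
    if isinstance(phi, (Empty, Prop)): return Empty()    # xt(∅) = xt(p) = ∅
    if isinstance(phi, (Eps, Check)):  return phi        # xt(ε) = ε, xt(✓) = ✓
    if isinstance(phi, Cat):  return Cat(xt(phi.left), xt(phi.right))
    if isinstance(phi, Or):   return Or(xt(phi.left), xt(phi.right))
    if isinstance(phi, And):  return And(xt(phi.left), xt(phi.right))
    if isinstance(phi, Star): return Star(xt(phi.arg))
    if isinstance(phi, Restrict):
        return Restrict(xt(phi.arg), phi.I, phi.J, phi.K)
    raise TypeError(phi)
```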

We now state our main result concerning derivatives of left-reduced timed regular expressions.

Theorem 2

(Derivative Computation). Given a left-reduced timed regular expression \(\varphi \) and a constant signal \(v:[t, t')\mapsto a\), applying the following rules yields an expression \(\psi \) such that \(\llbracket \psi \rrbracket = \varDelta _{v}(\llbracket \varphi \rrbracket )\).

$$\begin{aligned} \begin{array}{rcl} \varDelta _{v}(\varnothing ) &{} = &{} \varnothing \\ \varDelta _{v}(\epsilon ) &{} = &{} \varnothing \\ \varDelta _{v}(\checkmark ) &{} = &{} \varnothing \\ \varDelta _{v}(p) &{} = &{} {\left\{ \begin{array}{ll} \varGamma \vee \varGamma \cdot p &{} \ if \ a \models p \ where \ \varGamma := \ ^{[t,t']}_{[t,t']}\langle \checkmark \rangle _{[0,t'-t]}\\ \varnothing &{} \ otherwise \ \end{array}\right. } \\ \varDelta _{v}(\psi _{1}\cdot \psi _{2}) &{} = &{} \varDelta _{v}(\psi _{1})\cdot \psi _{2}\ \vee \ \text {xt}\big (\psi _{1} \vee \varDelta _{v}(\psi _{1})\big )\cdot \varDelta _{v}(\psi _{2})\\ \varDelta _{v}(\psi _{1}\vee \psi _{2}) &{} = &{} \varDelta _{v}(\psi _{1}) \vee \varDelta _{v}(\psi _{2}) \\ \varDelta _{v}(\psi _{1}\wedge \psi _{2}) &{} = &{} \varDelta _{v}(\psi _{1}) \wedge \varDelta _{v}(\psi _{2}) \\ \varDelta _{v}(^{K}_{J}\langle \psi \rangle _I) &{} = &{} ^{K}_{J}\langle \varDelta _{v}(\psi ) \rangle _I\\ \varDelta _{v}(\psi ^{*}) &{} = &{} \text {xt}(\varDelta _{v}(\psi ))^{*}\cdot \varDelta _{v}(\psi )\cdot \psi ^{*} \\ \end{array} \end{aligned}$$

Proof

By the semantic definition \(\varDelta _{v}(\varphi ) = \{\ \alpha \gamma w\ |\ \alpha uw \in \llbracket \varphi \rrbracket \text { and } (u, \gamma )\in \text {RSub}(v)\}\) where \(\text {RSub}(v) := \{\ R(u)\ |\ u\in \text {Sub}(v)\}\). We proceed by induction on the structure of \(\varphi \). In the following we sometimes use languages and expressions interchangeably in the interest of readability. Consider the cases:

  • For \(\varphi = \varnothing \), \(\varphi = \epsilon \) and \(\varphi = \checkmark \) : in all three cases \(\alpha uw \notin \llbracket \varphi \rrbracket \), therefore \(\varDelta _{v}(\varphi ) = \varnothing \).

  • For \(\varphi = p\) : we need \(\alpha = \epsilon \) and \(u\in \llbracket p \rrbracket \). Then \(\alpha uw \in \llbracket p \rrbracket \) can be satisfied only if either \(w = \epsilon \) or \(w\in \llbracket p \rrbracket \). By applying definitions, we get

    $$\begin{aligned} \begin{array}{rcl} \varDelta _{v}(p)&{} = &{} \{\ \gamma \ |\ u\in \llbracket p \rrbracket \text { and } (u, \gamma )\in \text {RSub}(v)\}\ \cup \\ &{} &{} \{\ \gamma w\ |\ u\in \llbracket p \rrbracket ,\ w\in \llbracket p \rrbracket \text { and } (u, \gamma )\in \text {RSub}(v) \}\\ &{} = &{} \varGamma \vee \varGamma \cdot \{ w\ |\ w\in \llbracket p \rrbracket \}\\ &{} = &{} \varGamma \vee \varGamma \cdot p \end{array} \end{aligned}$$

    where the expression \(\varGamma \) is \(^{[t,t']}_{[t,t']}\langle \checkmark \rangle _{[0,t'-t]}\). Hence, we have \(\varDelta _{v}(p) = \varGamma \vee \varGamma \cdot p\) if \(u\in \llbracket p \rrbracket \), otherwise \(\varDelta _{v}(p) = \varnothing \). The condition \(u\in \llbracket p \rrbracket \) can be easily checked by testing \(a\;{\models }\;p\).

  • For \(\varphi = \varphi _{1}\cdot \varphi _{2}\) : the condition \(\alpha uw \in \llbracket \varphi _{1}\cdot \varphi _{2} \rrbracket \) must be satisfied. There are three ways to split \(\alpha u w\) in dense time:

    • It can be split up into \(\alpha uw_{1}\in \llbracket \varphi _{1}\rrbracket \) and \(w_{2}\in \llbracket \varphi _{2}\rrbracket \).

      $$\begin{aligned} \begin{array}{rcl} \varDelta _{v}(\varphi ) &{} = &{} \{ \alpha \gamma w_{1}w_{2}\ |\ \alpha u w_{1}\in \llbracket \varphi _{1}\rrbracket ,\ w_{2}\in \llbracket \varphi _{2}\rrbracket \text { and } (u, \gamma )\in \text {RSub}(v)\}\\ &{} = &{} \{ \alpha \gamma w_{1}\ |\ \alpha u w_{1}\in \llbracket \varphi _{1} \rrbracket \text { and } (u, \gamma )\in \text {RSub}(v)\}\cdot \{ w_{2}\ |\ w_{2}\in \llbracket \varphi _{2}\rrbracket \}\\ &{} = &{} \varDelta _{v}(\varphi _{1})\cdot \varphi _{2}\\ \end{array} \end{aligned}$$
    • It can be split up into \(\alpha _{1}\in \llbracket \varphi _{1}\rrbracket \) and \(\alpha _{2}uw\in \llbracket \varphi _{2}\rrbracket \).

      $$\begin{aligned} \begin{array}{rcl} \varDelta _{v}(\varphi ) &{} = &{} \{ \alpha _{1}\alpha _{2}\gamma w\ |\ \alpha _{1}\in \llbracket \varphi _{1}\rrbracket ,\ \alpha _{2} u w\in \llbracket \varphi _{2}\rrbracket \text { and } (u, \gamma )\in \text {RSub}(v)\}\\ &{} = &{} \{ \alpha _{1}\ |\ \alpha _{1}\in \llbracket \varphi _{1} \rrbracket \}\cdot \{ \alpha _{2}\gamma w\ |\ \alpha _{2}uw\in \llbracket \varphi _{2}\rrbracket \text { and } (u, \gamma )\in \text {RSub}(v)\}\\ &{} = &{} \text {xt}(\varphi _{1})\cdot \varDelta _{v}(\varphi _{2})\\ \end{array} \end{aligned}$$
    • It can be split up into \(\alpha u_{1}\in \llbracket \varphi _{1}\rrbracket \) and \(u_{2}w\in \llbracket \varphi _{2}\rrbracket \). For this case, the definitions require that \(\varphi _{1}\) is a left-reduced expression and \(\varphi _{2}\) is a pure expression. This is the most involved case, requiring us to split the reducing signal.

      $$\begin{aligned} {\begin{array}{rcl} \varDelta _{v}(\varphi ) &{} = &{} \{ \alpha \gamma _{1}\gamma _{2} w\ |\ \alpha u_{1}\in \llbracket \varphi _{1}\rrbracket ,\ u_{2}w\in \llbracket \varphi _{2}\rrbracket \text { and } (u_{1}u_{2}, \gamma _{1}\gamma _{2})\in \text {RSub}(v)\}\\ &{} = &{} \{ \alpha \gamma _{1}\gamma _{2} w\ |\ \alpha u_{1}\in \llbracket \varphi _{1}\rrbracket ,\ u_{2}w\in \llbracket \varphi _{2}\rrbracket ,\ (u_{1}, \gamma _{1})\in \text {RSub}(v),\\ &{} &{}\quad (u_{2}, \gamma _{2})\in \text {RSub}(v) \text { and } (u_{1}, \gamma _{1}) \text { meets } (u_{2}, \gamma _{2})\}\\ &{} = &{} \{ \alpha \gamma _{1}\ |\ \alpha u_{1}\in \llbracket \varphi _{1} \rrbracket \text { and } (u_{1}, \gamma _{1})\in \text {RSub}(v)\}\cdot \\ &{} &{} \quad \{\gamma _{2}w\ |\ u_{2}w\in \llbracket \varphi _{2} \rrbracket \text { and } (u_{2}, \gamma _{2})\in \text {RSub}(v)\}\\ &{} = &{} \text {xt}(\varDelta _{v}(\varphi _{1}))\cdot \varDelta _{v}(\varphi _{2})\\ \end{array}} \end{aligned}$$

    Thus \(\varDelta _{v}(\varphi _{1}\cdot \varphi _{2})\) is the disjunction of these three cases. By rearranging the last two, we obtain the equality claimed in the theorem.

  • For \(\varphi = \psi ^{*}\): assume without loss of generality that \(\epsilon \not \in \llbracket \psi \rrbracket \). Then

    $$\begin{aligned} \begin{array}{rcl} \varDelta _{v}(\psi ^{*}) &{} = &{} \varDelta _{v}(\epsilon ) \vee \varDelta _{v}(\psi \cdot \psi ^{*})\\ &{} = &{} \varDelta _{v}(\psi )\cdot \psi ^{*}\ \vee \ \text {xt}(\psi )\cdot \varDelta _{v}(\psi ^{*})\ \vee \ \text {xt}(\varDelta _{v}(\psi ))\cdot \varDelta _{v}(\psi ^{*})\\ &{} = &{} \varDelta _{v}(\psi )\cdot \psi ^{*}\ \vee \ \text {xt}(\varDelta _{v}(\psi ))\cdot \varDelta _{v}(\psi ^{*})\\ &{} = &{} [\epsilon \ \vee \ X\ \vee \ X^{2}\ \vee \dots \vee \ X^{\infty }]\cdot \varDelta _{v}(\psi )\cdot \psi ^{*} \text { where } X =\text {xt}(\varDelta _{v}(\psi )) \\ &{} = &{} \text {xt}(\varDelta _{v}(\psi ))^{*}\cdot \varDelta _{v}(\psi )\cdot \psi ^{*} \\ \end{array} \end{aligned}$$
  • Time restriction and Boolean operations follow definitions straightforwardly.

Corollary 1

The derivative \(\varDelta _{v}(\varphi )\) of a left-reduced timed regular expression \(\varphi \) with respect to a constant signal v is a left-reduced timed regular expression.

Proof

Theorem 2 shows that only a finite number of regular operations is required to compute the derivative, and the resulting expressions satisfy the requirements of Definition 7.

We extend derivatives for arbitrary signals by letting \(\varDelta _{\epsilon }(\varphi ) = \varphi \) and

$$ \varDelta _{v\cdot w}(\varphi ) = \varDelta _{w}(\varDelta _{v}(\varphi )). $$
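Under the same illustrative abstract syntax, the rules of Theorem 2 give the following sketch of \(\varDelta _{v}\) for a constant segment; sat(a, p) is an assumed helper implementing the test \(a\models p\), and no algebraic simplification is performed here.

```python
def delta(phi: TRE, a, t: float, tp: float, sat) -> TRE:
    """Δ_v(φ) for the constant signal v : [t, tp) ↦ a, following Theorem 2."""
    if isinstance(phi, (Empty, Eps, Check)):
        return Empty()
    if isinstance(phi, Prop):
        if not sat(a, phi.name):
            return Empty()
        # Γ = ^[t,tp]_[t,tp]⟨✓⟩_[0, tp - t]
        gamma = Restrict(Check(), I=(0.0, tp - t), J=(t, tp), K=(t, tp))
        return Or(gamma, Cat(gamma, phi))
    if isinstance(phi, Cat):
        d1 = delta(phi.left, a, t, tp, sat)
        d2 = delta(phi.right, a, t, tp, sat)
        # Δ(ψ1·ψ2) = Δ(ψ1)·ψ2 ∨ xt(ψ1 ∨ Δ(ψ1))·Δ(ψ2)
        return Or(Cat(d1, phi.right), Cat(xt(Or(phi.left, d1)), d2))
    if isinstance(phi, Or):
        return Or(delta(phi.left, a, t, tp, sat), delta(phi.right, a, t, tp, sat))
    if isinstance(phi, And):
        return And(delta(phi.left, a, t, tp, sat), delta(phi.right, a, t, tp, sat))
    if isinstance(phi, Restrict):
        return Restrict(delta(phi.arg, a, t, tp, sat), phi.I, phi.J, phi.K)
    if isinstance(phi, Star):
        d = delta(phi.arg, a, t, tp, sat)
        # Δ(ψ*) = xt(Δ(ψ))* · Δ(ψ) · ψ*
        return Cat(Star(xt(d)), Cat(d, phi))
    raise TypeError(phi)
```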

Lemma 1

The derivative \(\varDelta _{w}(\varphi )\) of a left-reduced timed regular expression \(\varphi \) with respect to a signal \(w = w_{1}\dots w_{n}\) with n segments is equivalent to the left reduction of \(\varphi \) with respect to the set of sub-signals of w beginning in \(dom(w_{1})\) and ending in \(dom(w_{n})\).

$$\begin{aligned} \varDelta _{w}(\varphi ) = \bigcup _{u\in \text {Sub}_{[1:n]}(w)}\delta _{u}(\llbracket \varphi \rrbracket ) \end{aligned}$$

Proof

Using definitions we directly have

$$\begin{aligned} \begin{array}{rcl} \varDelta _{w}(\varphi ) &{} = &{} \varDelta _{w_{n}}(\varDelta _{w_{n-1}}(\dots (\varDelta _{w_{1}}(\varphi ))))\\ &{} = &{} \delta _{\text {Sub}(w_{n})}(\delta _{\text {Sub}(w_{n-1})}(\dots (\delta _{\text {Sub}(w_{1})}(\llbracket \varphi \rrbracket ))))\\ &{} = &{} \delta _{\text {Sub}(w_{1})\cdot \text {Sub}(w_{2}) \dots \text {Sub}(w_{n})}(\llbracket \varphi \rrbracket )\\ &{} = &{} \delta _{\text {Sub}_{[1:n]}(w)}(\llbracket \varphi \rrbracket )\\ \end{array} \end{aligned}$$

5 Application to Online Timed Pattern Matching

In this section we solve the problem of online timed pattern matching by applying concepts and results introduced in previous sections. Our online matching procedure assumes the input signal w to be presented incrementally as follows. Let \(w = w_{1}w_{2}\dots w_{n}\) be an n-variability signal and at each step j we read a new segment \(w_{j}: [t_{j}, t_{j}')\mapsto a_{j}\) where \(a_{j}\in \varSigma \). After reading a new segment \(w_{j}\) we may have new matches ending in \(\text {dom}(w_{j})\) in addition to previously found matches. Therefore we define an incremental match set \(\mathcal {M}_{j}(\varphi , w)\) consisting of matches ending in \(\text {dom}(w_{j})\) and we say that \(\mathcal {M}_{j}(\varphi , w)\) is the output of \(j^{th}\) incremental step.

$$\begin{aligned} \mathcal {M}_{j}(\varphi , w) := \{\ \tau (s)\ |\ s\in \llbracket \varphi \rrbracket ,\ s \in Sub _{[i:j]}(w) \text { and } 1\le i\le j \} \end{aligned}$$

We then define the state of the online timed pattern matching procedure at the step j as a left-reduced timed regular expression.

Definition 10

(The State of Online Procedure). Given a pure timed regular expression \(\varphi \) the state of the online timed pattern matching procedure after reading a prefix \(w_{1..j}\) of the input signal is:

$$\begin{aligned} \psi _{j} := \bigvee _{1 \le i \le j} \varDelta _{w_{i..j}}(\varphi ) \end{aligned}$$

Then, starting with \(\psi _{0} = \varphi \), we update the state upon reading \(w_{j+1}\) by letting

$$\begin{aligned} \psi _{j+1} = \varDelta _{w_{j+1}}(\psi _{j}) \vee \varDelta _{w_{j+1}}(\varphi ) \end{aligned}$$

Now we show that the extraction of reduced signals from state \(\psi _{j}\) provides the match set \(\mathcal {M}_{j}(\varphi , w)\). We do not make a distinction here between a reduced signal \(\alpha \) and its time domain \(\tau (\alpha )\) as they stand for the same thing.

Theorem 3

Given a state \(\psi _{j}\) of an online matching procedure for expression \(\varphi \) and a signal w, the incremental match set \(\mathcal {M}_{j}(\varphi , w)\) is found by the extraction of the state:

$$\begin{aligned} \mathcal {M}_{j}(\varphi , w) = \text {xt}(\psi _{j}) \end{aligned}$$

Proof

Following Definition 10 and Lemma 1 we know that the state \(\psi _{j}\) represents the left reduction \(\delta _S(\llbracket \varphi \rrbracket )\) of \(\varphi \) with respect to the set S of signals s satisfying \(s\in \text {Sub}(w)\) and \(\tau _{2}(s)\in \text {dom}(w_{j})\). A reduced signal \(\alpha \) in \(\delta _S(\llbracket \varphi \rrbracket )\) indicates the existence of a signal \(s\in S\) such that \(\tau (s)=\tau (\alpha )\) and \(s\in \llbracket \varphi \rrbracket \), thus s is a match. Then we can find the match set \(\mathcal {M}_{j}\) by extracting all reduced signals from the state \(\psi _{j}\).

Theorem 3 gives us a complete procedure for online timed pattern matching for a given expression \(\varphi \) and an input signal \(w = w_{1}\dots w_{n}\), summarized below (a code sketch of this loop follows the list):

  1. Extract \(\varphi \) to see if the empty word is a match.

  2. For \(1 \le j \le n\) repeat:

     (a) Update the state of the matching \(\psi _{j}\) by deriving the previous state \(\psi _{j-1}\) with respect to \(w_{j}\) and adding a new derivative \(\varDelta _{w_{j}}(\varphi )\) to the state for matches starting in segment j.

     (b) Extract \(\psi _{j}\) to get matches ending in segment j.
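Putting the pieces together, the following is a sketch of this loop in terms of the delta() and xt() sketches above; simplify() stands for the algebraic simplifications of Sect. 3 and is merely assumed here (the identity function would still be correct, but would let the state expression grow).

```python
def online_match(phi: TRE, segments, sat, simplify=lambda e: e):
    """segments yields constant segments (a, t, tp) of the input, in order."""
    yield xt(phi)                       # step 1: is the empty word a match?
    state = phi                         # ψ_0 = φ
    for (a, t, tp) in segments:
        # ψ_j = Δ_{w_j}(ψ_{j-1}) ∨ Δ_{w_j}(φ)   (the duplication at j = 1 is harmless)
        state = simplify(Or(delta(state, a, t, tp, sat),
                            delta(phi,   a, t, tp, sat)))
        yield xt(state)                 # matches ending in segment j
```

Applied to \(\varphi = \langle p\cdot q \rangle _{[4,7]}\) and the three segments of Fig. 2 below, with a suitable sat, the successive values of xt(state) should correspond, after simplification, to the reduced expressions shaded in Table 2.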

Fig. 2. A signal \(w := w_{1}w_{2}w_{3}\) over variables p and q.

Table 2. Timed pattern matching using derivatives for \(w=w_{1}w_{2}w_{3}\) and \(\varphi = \langle p\cdot q \rangle _{I}\). Entries represent the derivative with respect to \(w_{i..j}\). Reduced expressions, indicating matched segments, are shaded in green. (\(I=[4,7]\), \(\varGamma _{1} =\ ^{[0,3]}_{[0,3]} \langle \checkmark \rangle _{[0,3]}\), \(\varGamma _{2} =\ ^{[3,8]}_{[3,8]} \langle \checkmark \rangle _{[0,5]}\) and \(\varGamma _{3} =\ ^{[8,10]}_{[8,10]} \langle \checkmark \rangle _{[0,2]}\).)

We present an example of online pattern matching for the timed regular expression \(\varphi := \langle p\cdot q \rangle _{[4,7]}\) and input signal \(w := w_{1}w_{2}w_{3}\) with \(w_{1}:[0,3)\mapsto \{p\wedge \lnot q\}\), \(w_{2}:[3,8)\mapsto \{p\wedge q\}\) and \(w_{3}:[8,10)\mapsto \{\lnot p\wedge q\}\) over propositional variables p and q shown in Fig. 2. In Table 2 we depict the step-by-step computation of the match set \(\mathcal {M}(\varphi , w)\) after reading each segment of w. For Step 1 the state \(\psi _{1}\) is equal to the derivative of \(\varphi \) with respect to \(w_{1}\) such that \(\psi _{1} = \langle \varGamma _{1}\cdot q\rangle _{[4,7]} \vee \langle \varGamma _{1}\cdot p\cdot q\rangle _{[4,7]}\) where \(\varGamma _{1} =\ ^{[0,3]}_{[0,3]} \langle \checkmark \rangle _{[0,3]}\). The extraction \(\text {xt}(\psi _{1})\) is empty therefore we do not have any match ending in \(\text {dom}(w_{1}) = [0,3)\). For Step 2 where \(\varGamma _{2} =\ ^{[3,8]}_{[3,8]} \langle \checkmark \rangle _{[0,5]}\) the extraction of the state is equal to \(\text {xt}(\psi _{2}) = \langle \varGamma _{1}\cdot \varGamma _{2}\rangle _{[4,7]} \vee \langle \varGamma _{2}\rangle _{[4,7]} =\ ^{[4,8]}_{[0,3]}\langle \checkmark \rangle _{[4,7]} \vee \ ^{[7,8]}_{[3,4]}\langle \checkmark \rangle _{[4,5]}\). Similarly, for Step 3 where \(\varGamma _{3} =\ ^{[8,10]}_{[8,10]} \langle \checkmark \rangle _{[0,2]}\), the extraction of the state is equal to \(\text {xt}(\psi _{3}) = \langle \varGamma _{1}\cdot \varGamma _{2}\cdot \varGamma _{3}\rangle _{[4,7]} \vee \langle \varGamma _{2}\cdot \varGamma _{3}\rangle _{[4,7]} =\ ^{[8,9]}_{[1,3]}\langle \checkmark \rangle _{[5,7]} \vee \ ^{[8,9]}_{[4,6]}\langle \checkmark \rangle _{[4,5]}\). In Fig. 3 we illustrate corresponding segments \((t,t')\) extracted in Steps 2 and 3 where solid regions show the actual outputs for the corresponding step.

Fig. 3. A graphical representation of the online timed pattern matching presented in Table 2, with t and \(t'\) denoting, respectively, the beginning and end of the match.

We implemented our procedure using the functional term rewriting language Pure and C++. Besides the derivative and extraction rules introduced in this paper, our implementation includes some basic algebraic rewrite rules as well as the simplification rules for reduced expressions given in Sect. 3. We performed our experiments on a 3.3 GHz machine for a set of test patterns and depict performance results of the online procedure in comparison with the offline procedure of [17] in Table 3. For typical cases, experiments suggest a linear time performance with respect to the number of segments in the input for both algorithms. Although the online procedure runs slower than the offline procedure, it requires less memory and, as expected, its memory usage does not depend on the input size.

Table 3. Execution times/Memory usage (in seconds/megabytes)

6 Conclusions

The contribution of the paper is both theoretical and practical. From a theoretical standpoint we have tackled the difficult problem of exporting the concept of derivatives from discrete to timed behaviors, languages and expressions. To this end we introduced a new approach to handle signals in absolute time, yielding a new type of monoid with interesting properties that by itself is worth investigating in the future. We have shown that such derivatives can be computed syntactically using left-reduced timed regular expressions and that all the matches of the expressions in the signal can be extracted from this representation.

We have used these results to implement a novel procedure for online pattern matching of timed behaviors that can be used to monitor systems in real time and detect occurrences of complex patterns. Our procedure consumes a constant segment from the input signal and reports a set of matches ending in that segment before processing the next segment. The algorithm can be applied, of course, to the discrete case where words are viewed as signals that can change their values only at integer times. Despite the overhead, our algorithm might be advantageous for words that have long periods of stuttering, provided a delay in the detection of matches can be tolerated.

We believe that this procedure has many potential applications in detecting temporal patterns at different time scales. It can be used, for example, to detect patterns in music as in [8], in cardiac behavior or in speech. To this end the expressions should be extended with predicates over real numbers [7], as in the passage from MTL to STL (signal temporal logic) [11]. Other potential application domains include the detection of congestion in road traffic or in communication networks, and the analysis of execution logs of organizations' information systems or web servers, for example to detect internet robots or customers who are about to abandon a web site.