1 Introduction

The Model-Driven Engineering (MDE) approach to software development focuses on managing the complexities arising in every stage of the software development cycle [22]. In MDE, models constitute abstractions of complex issues originating in the problem-domain which are systematically transformed into implementation-domain artifacts via computer-based technologies [43]. Modern software systems are intricate and operate in highly dynamic environments for which few assumptions can be made at design-time. This setting has sparked an interest in sophisticated approaches to system monitoring and adaptation at runtime, i.e., during the system execution, which aim to mitigate uncertainty. At the same time, these approaches have highlighted the complexities of these activities. Owing to the effectiveness of models at design-time, numerous initiatives have surfaced [103] which use models capturing a snapshot of the system constituents as well as their state, i.e., runtime models [16, 21], to realize monitoring [24, 28] and adaptation [41, 47, 107] solutions.

Thus far, the primary concern of these solutions has been the attainment and handling of runtime models which represent the most recent snapshot; the evolution of the model, i.e., its history, has been generally neglected [16, 17], although its key role in enabling more informed decision-making during the system lifetime [38, 45] and postmortem analysis [14] has long been recognized. Only recently did solutions surface which exploit this potential, e.g., via detection of recurrent behavior patterns and behavior explanation [44], analysis of temporal requirements [84], or inference of probabilistic adaptations based on past interactions [40].

Representing and utilizing history pose significant challenges to solutions which rely on runtime models: representing history requires the representation of a collection of snapshots—rather than the most recent one—through which changes to the model can be tracked; utilizing history relies on the ability to query history in an effective and intuitive manner, i.e., on the ordering as well as the (real) time in which changes in the structure of a snapshot occurred. The challenges are only exacerbated in case the model is to be queried online, i.e., while the system is running, and the monitoring or adaptation solution depends on the query answer for taking a remedial action or planning an adaptation.

Based on these challenges, we elicit the following requirements for solutions which handle the history of a runtime model: R1—an encoding of history which represents multiple snapshots as well as enables tracing the occurrence and timing of changes in-between snapshots; R2—a query language which supports statements on the model structure as well as the ordering and (quantitative) timing in which structural changes occur; R3—fast query execution, as for monitoring or adaptation scenarios query answers might be necessary for decision-making at runtime; R4—a memory-efficient representation which may save resources and expedite query executions.

Fig. 1 Overview of system and InTempo interaction

Following the common practice of representing a runtime model as a typed attributed graph, we present a query language and a querying scheme which together fulfill the requirements above. An abstract overview of the system interaction with the scheme, named InTempo, is shown in Fig. 1. The language incorporates a temporal logic defined on graphs, thus enabling the formulation of temporal graph queries, i.e., graph-based model queries with ordering and timing constraints on the occurrence of graph structures (R2). The querying scheme, i.e., a collection of inter-dependent operations, iteratively processes timed events, i.e., changes to the system which represent its history, and makes the corresponding modifications to a model-based history encoding \({\mathscr {R}}\) (R1). After each modification, InTempo executes input queries on \({\mathscr {R}}\) and returns the answers to the system.

Regarding fast query executions (R3), we present an operationalization framework which automatically maps a temporal graph logic formula to a network of simple graph sub-queries, thereby enabling the incremental execution of temporal graph queries. For memory-efficiency (R4), we present an optional method which, based on the timing constraints of the formula, derives a window during which model elements are relevant to query executions. Elements outside the window can be pruned from the model, thus reducing its size, while query executions return the same results as over the un-pruned model.

To demonstrate the effectiveness of InTempo we present an implementation based on the Eclipse Modeling Framework [36, 101]. The implementation is integrated with a feedback control loop which instruments system adaptation, i.e., an adaptation engine [66], and evaluated via simulations based on a case-study of a smart medical system, a real medical guideline, and a combination of real and synthetic event logs. The logs are used for simulations in which query answers from InTempo are used while planning adaptations. The case-study is of particular relevance as in healthcare, real-time requirements are key to medical procedures [26] and therefore fast query executions are necessary. The performance of the implementation, i.e., its fulfillment of R3 and R4, is compared to that of a relevant tool for runtime monitoring as well as a tool from the MDE community. Moreover, we use data generated by the LDBC Social Network Benchmark [75] to test the querying performance of the implementation against larger and more complex graph structures.

This article is an extension of a paper published in MoDELS ’20 [94]. Besides improvements on the presentation, structure, and technical explanations, the following contributions are novel with respect to its precursor: (i) the provision of formal arguments on the correctness of answers returned by InTempo; (ii) the expansion of the concept of discarding history, i.e., the definition of a projected answer over partial history representations and support for multiple queries; (iii) the generalization of the usage of temporal graph queries for monitoring temporal properties against the history of a system—which was treated ad hoc in [94]; (iv) an additional comparison to a tool from the MDE community capable of querying history; (v) a newly introduced evaluation based on an independent benchmark; and (vi) an extension of the discussion of related work.

The rest of the paper is organized as follows. Section 2 discusses the case-study and the foundations of InTempo. A technical overview of the scheme is presented in Sect. 3. The introduced query language for temporal graph queries is presented in Sect. 4, while Sect. 5 presents the operationalization framework which enables incremental query execution. Section 6 presents a method to discard elements that are not relevant to query executions. Section 7 presents the extensions that make InTempo suitable for intuitive monitoring of future temporal properties, and Sect. 8 presents the application of the scheme in a self-adaptation scenario. We evaluate the performance of our implementation in Sect. 9, discuss related work in Sect. 10, and conclude the paper as well as discuss future work in Sect. 11. The Appendix contains technical preliminaries, proofs of formal arguments, and supplemental information on the evaluation.

2 Prerequisites

This section introduces the case-study from the medical domain, which we use as a running example in the remainder. Moreover, this section presents the foundations of InTempo.

Fig. 2 Metamodel of SHS (excerpt)

2.1 Smart healthcare system

The case-study is based on a service-based simulated Smart Healthcare System (SHS). The SHS is based on smart medical environments [92] where sensors periodically collect physiological measurements of patients, i.e., data such as temperature, heart rate, and blood pressure, and certain medical procedures are automated and performed by devices, such as a smart pump administering medicine, based on the collected patient measurements—as a clinician would otherwise do. Figure 2 depicts the metamodel of the SHS (based on the well-known Ecore syntax [36]) which defines valid model instances [19]. The SHS metamodel is based on the exemplar of a service-based medical system in [109] and captures the running system as an instance of the Architecture class.

In the SHS, each patient is connected to a sensor. Services are invoked by a main service called SHSService to collect measurements from sensors, i.e., PMonitoringService, or take medical actions via smart medical devices such as a pump, i.e., DrugService. Invocations are triggered by effectors (Effector) and invocation results are tracked via monitoring probes (Probe) that are attached to Services. Probes are generated periodically or upon events in the real world. Each Probe has a status attribute whose value depends on the type of Service. Each Service has a pID attribute which identifies the patient for whom the Service is invoked. Elements in gray are explained later in the article.

2.2 History and time domain

We identify the system behavior with a (possibly infinite) sequence of instantaneous timed events which represent observable actions or state changes made by the system or its context at some time point. The system has a clock whose time domain is the set of non-negative real numbers \({\mathbb {R}}_{0}^{+}\). An element of the time domain is called a time point.

Fig. 3 Rudimentary example of an RTM

Intuitively, the history of a system with respect to a timed event is the sequence of all observed timed events up to and including said timed event. Technically, the history corresponds to a finite prefix of the behavior consisting of pairs \((e^i,\tau )\) with \(e^i\) a timed event from a set of possible observations of interest \({\mathscr {E}}\), \(i\in {\mathbb {N}}\) the position of the event in the sequence, and \(\tau \in {\mathbb {T}}\) the time point of occurrence, with \({\mathbb {T}}\) the time domain \({\mathbb {R}}_{0}^{+}\). We use the shorthand \(e_{\tau }\) when the index position is irrelevant and the shorthand \(\tau _i\) to denote the time point at position i. Note that, for presentation purposes, we group all changes with the same time point in one event. However, we require that time in the history eventually diverges, i.e., ruling out Zeno behaviors and ensuring that no event groups an infinite amount of changes.

For example, for three events \(e_2,e_4,e_5\), the encompassing history at time point 5 is denoted \({\bar{h}}_5:=e_{2}e_{4}e_{5}\). When a new event \(e_7\) occurs, the history is incremented by a concatenation to reflect the new history at time point 7, i.e., \({\bar{h}}_7={\bar{h}}_5\cdot e_7\).
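To make the encoding concrete, the history of timed events can be sketched in a few lines of Python. The names `TimedEvent` and `History` are ours, not part of InTempo; the sketch only captures the structure described above: a finite sequence of events with non-decreasing time points, whose time point is that of its latest event.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class TimedEvent:
    """A group of changes observed at a single time point."""
    changes: tuple   # opaque payload, e.g. ("create", "pm2"), ("delete", "d1")
    time: float      # time point tau, an element of the time domain R>=0

@dataclass
class History:
    """Finite prefix of the system behavior: events in non-decreasing time order."""
    events: List[TimedEvent] = field(default_factory=list)

    def append(self, event: TimedEvent) -> None:
        # the history is only ever incremented by concatenation
        if self.events and event.time < self.events[-1].time:
            raise ValueError("events must arrive in non-decreasing time order")
        self.events.append(event)

    @property
    def time(self) -> float:
        """Time point of the history, i.e., of its latest event."""
        return self.events[-1].time if self.events else 0.0
```

With this sketch, the example above reads: appending events at time points 2, 4, and 5 yields the history at time point 5, and appending an event at time point 7 yields the history at time point 7.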

Fig. 4 History \({\bar{h}}^G_7\) comprising RTMs

2.3 Runtime models and history

A (structural) Runtime Model (RTM) is a snapshot of the constituents of the modeled system and their state [15, 21]. See Fig. 3 for an example of a rudimentary RTM (capturing a fragment of the Architecture instance) based on the metamodel in Fig. 2. RTMs are causally connected to the modeled system: if the RTM is modified, the system mirrors the modification and vice versa. The causal connection between the system and the RTM may be utilized to alter the system via model transformation [49] where model queries search for parts of the model that are to be altered via in-place transformations which correspond to desired alterations to the system. By virtue of the causal connection, the alterations are enacted on the modeled system.

A history can be represented by a sequence of RTMs. In this representation, each member is associated with the time point of an event \(e_\tau \) and mirrors the changes corresponding to \(e_\tau \) in the system. For instance, represented by RTMs, \({\bar{h}}_5\) corresponds to the sequence \({\bar{h}}^G_5 :=G_2G_4G_5\) (see Fig. 4); each RTM is yielded by an event, is associated with the time point of the yielding event, and extends its predecessor by the changes corresponding to the event. The RTM \(G_2\) extends the empty model \(\emptyset \) which is at the start of every such sequence. For spawning an RTM based on an event, we assume that there exists a mapping from the set of events \({\mathscr {E}}\) to corresponding model modifications. In this case, according to the mapping, event \(e_7\) corresponds to the association of s with the newly added pm\(_{2}\) and the disassociation of s from the deleted d\(_{1}\)—when the event occurs, owing to causal connection, it is mirrored in the RTM. To include the latest change, the history representation \({\bar{h}}^G_5\) is extended by the RTM \(G_7\), i.e., \({\bar{h}}^G_7 = {\bar{h}}^G_5 \cdot G_7\)—illustrated in Fig. 4.

Fig. 5 RTM\(^{\mathrm{H}}\) instances \(H_{[5]}\) and \(H_{[7]}\)

A Runtime Model with History (RTM\(^{\mathrm{H}}\)) [95] is an enhanced RTM that simultaneously provides two views on the modeled system: a view of the current system state which corresponds to a conventional causally connected RTM; and a compact view of the history. The view on history is afforded by each entity being equipped with a creation timestamp and a deletion timestamp, abbreviated cts and dts, respectively. For an example, see Fig. 2 where all entities inherit from the MonitorableEntity.

The cts and dts capture the time points of creation and deletion of an entity, respectively, by an event. Upon the occurrence of an event, similarly to an event yielding a new system state, the corresponding entity creation (deletion) in the RTM\(^{\mathrm{H}}\) yields a new instance of the RTM\(^{\mathrm{H}}\) where the cts (dts) of the modified entity is set based on the time point of the event. When an entity is created, its dts is set to \(\infty \). When an entity is deleted in the modeled system, the respective entity is not deleted in the RTM\(^{\mathrm{H}}\). Rather, its dts is updated to the time point of the event that induced the deletion.

We assume that connectors in an RTM\(^{\mathrm{H}}\) exist for as long as both their end-points exist. Moreover, attribute values in an RTM\(^{\mathrm{H}}\) are set when the entities are created and, once set, remain unchanged. If changes of attribute values (or connectors) in the modeled system are of interest, an RTM\(^{\mathrm{H}}\) can track such changes by appropriate modeling decisions, e.g., by encoding these elements as entities in the metamodel—as shown in [70]. Each value change would then lead to a creation of a new entity in the RTM\(^{\mathrm{H}}\), where the duration of the value would be captured by a cts and a dts. We demonstrate such an encoding in the evaluation in Sect. 9.4.

By retaining all entities as well as information on their creation and deletion time points, an RTM\(^{\mathrm{H}}\) instance suffices to represent the evolution of entities up to and including the time point of the event that yielded the instance in question. In contrast to a sequence of RTMs, which stores multiple RTMs and all their entities so as to be able to track the evolution of entities across the sequence, an RTM\(^{\mathrm{H}}\) stores only a single instance of each model entity. Hence, it affords a compact and, therefore, more efficient representation of history. Similarly to an RTM, an instance of the RTM\(^{\mathrm{H}}\) is always associated with the time point of the latest event. For example, the representation of \({\bar{h}}_7^G\) by an RTM\(^{\mathrm{H}}\) yields a single model, \(H_{[7]}\), that contains the same information as \({\bar{h}}_7^G\). \(H_{[7]}\) is illustrated in Fig. 5. We note that, in an RTM\(^{\mathrm{H}}\), causal connection only applies to the latest snapshot.

Technically, an RTM\(^{\mathrm{H}}\) is obtained by an iterative coalescence of a new event and an RTM\(^{\mathrm{H}}\) into a new RTM\(^{\mathrm{H}}\) instance. For creations, the new instance contains new entities corresponding to the event and sets the values of their attributes: regular attributes are set according to data in the event, the cts is set based on the time point of the event, and the dts is set to \(\infty \). For deletions, the dts values of the affected entities are set to the time point of the event. We denote this coalescence by \(\odot \). For example, when \({\bar{h}}_5^G\) is incremented by the event \(e_{7}\) which corresponds to the creation of pm\(_{2}\) and the deletion of d\(_{1}\), this event yields a new RTM\(^{\mathrm{H}}\) \(H_{[7]}=H_{[5]} \odot e_{7}\), illustrated in Fig. 5—attributes updated from the previous RTM\(^{\mathrm{H}}\) are accentuated.
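The coalescence \(\odot \) can be sketched as follows. This is a simplified illustration under assumptions of ours: entities are identified by a name, the event is already split into creations and deletions, and attribute payloads are plain dictionaries. The essential point it demonstrates is that deletions only stamp the dts; entities are never removed.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attrs: dict
    cts: float = 0.0         # creation timestamp
    dts: float = math.inf    # deletion timestamp; inf while the entity exists

class RTMH:
    """Runtime model with history: retains deleted entities via cts/dts."""
    def __init__(self):
        self.entities = {}   # name -> Entity
        self.time = 0.0      # time point of the latest coalesced event

    def coalesce(self, creations, deletions, tau):
        """The coalescence H ⊙ e: apply one timed event at time point tau."""
        for name, attrs in creations:
            # regular attributes from the event; cts from the event's time point
            self.entities[name] = Entity(name, attrs, cts=tau)
        for name in deletions:
            # the entity is not removed, only stamped as deleted
            self.entities[name].dts = tau
        self.time = tau
        return self
```

For the running example, coalescing the event \(e_7\) into \(H_{[5]}\) creates pm\(_{2}\) with cts 7 and dts \(\infty \), and sets the dts of d\(_{1}\) to 7, while d\(_{1}\) remains in the model.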

2.4 Graph-based RTMs, transformation, and queries

An RTM is often encoded as a typed, attributed graph [37] where entities are modeled as vertices, connectors between entities as edges, and information about entities as vertex attributes [106]. A typed, attributed graph (henceforth, simply referred to as a graph) is typed over a type graph which defines types of vertices, edges, and attributes—similarly to the relationship between a metamodel and a model. In this context, the metamodel in Fig. 2 may be seen as an informal representation of the type graph of the SHS. The RTM\(^{\mathrm{H}}\) is analogously based on graphs with history [100].

Attributes are associated with a data type, i.e., a character string, an integer, a real number, or a Boolean. Graphs contain a set of assignments A which assign data-type-compatible values to attributes, e.g., pm\(_{1}\).pID \(=1\) in Fig. 3. Given that attribute values in an RTM\(^{\mathrm{H}}\) are set upon the creation of an entity and remain unchanged (see Sect. 2.3), assignments in A are fixed. Formally, the set of assignments A may have various representations, e.g., distinguished data vertices [37] or an attribute constraint over sorted variables [99].

Encoding an RTM as a graph allows for the realization of model transformation via established formalisms, such as typed, attributed graph transformation [37] where graph transformation rules are used to search for a part of the model which is transformed in place [49].

In short, let G be a graph (in this context, encoding an RTM) typed over a type graph T and \(\rho \) a graph transformation rule. The rule \(\rho \) is characterized by a left-hand side (LHS) and a right-hand side (RHS) graph, also typed over T, which define the pre-condition and postcondition of an application of \(\rho \), respectively. Intuitively, the execution of \(\rho \) searches for LHS in G and transforms it according to RHS. Moreover, the LHS graph may be extended with a Boolean expression \(\gamma \) (additionally to the set of assignments A) over the values of attributes in A. We refer to this LHS graph extended by \(\gamma \) as a (graph) pattern and to G as a host graph.

The LHS of a rule characterizes a (graph) query, which is the equivalent graph-based notion of a model query. The execution of a graph query over a given host graph, also called (graph) pattern matching, amounts to finding matches, i.e., occurrences of the query pattern in the host graph, whose attribute values satisfy the assignments and attribute constraint in A and \(\gamma \) of the pattern, respectively.

Formally, a match is a mapping from the pattern in the query to the host graph which preserves structure and type—also called a morphism. In the following, we use the two terms interchangeably. The query answer set is the set containing exactly the matches for a query in G. The transformation of the rule, specified in the RHS, is performed only when a match for the query, i.e., the LHS, has been found. The match identifies a part of G where the transformation should occur.
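As a rough illustration of pattern matching, the sketch below enumerates type-preserving vertex mappings by brute force and keeps those that preserve every pattern edge and satisfy the attribute constraint; the `gamma` parameter is our stand-in for \(\gamma \). Production engines avoid this exhaustive enumeration, e.g., via search plans, but the returned set corresponds to the answer set described above.

```python
from itertools import product

def find_matches(pattern_vertices, pattern_edges,
                 host_vertices, host_edges, gamma=lambda m: True):
    """Naive pattern matching.
    pattern_vertices / host_vertices: {vertex_id: type};
    pattern_edges / host_edges: sets of (source, target) pairs;
    gamma: predicate over a match, standing in for the attribute constraint."""
    p_ids = list(pattern_vertices)
    # candidate host vertices per pattern vertex, filtered by type
    candidates = [
        [h for h, t in host_vertices.items() if t == pattern_vertices[p]]
        for p in p_ids
    ]
    matches = []
    for combo in product(*candidates):
        m = dict(zip(p_ids, combo))
        # a match must preserve every edge of the pattern and satisfy gamma
        if all((m[s], m[d]) in host_edges for s, d in pattern_edges) and gamma(m):
            matches.append(m)
    return matches   # the answer set: all matches of the pattern in the host
```

For instance, matching a Service–Probe pattern against a host graph with one Service attached to two Probes yields two matches; adding a `gamma` constraint on a Probe attribute would narrow the answer set accordingly.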

Queries with Complex Patterns In certain cases, simple patterns are not sufficient as a language for defining more complex application conditions of rules, for instance if the existence of certain model elements should be prohibited. In those cases, an LHS, i.e., a graph query, is enhanced with an application condition \(ac\) which every match m should satisfy. In the following, a query \(\theta \) is characterized by a pattern n and an application condition \(ac\), and denoted \(\theta :=(n,ac)\).

The language of Nested Graph Conditions (NGCs) [55] can formulate \(ac\) that are as expressive as first-order logic on graphs [27], as shown in [55, 88], and constitutes, as such, a natural formal foundation for pattern-based queries. NGCs support standard first-order logic operators. The syntax of an NGC \(\phi \) is given by the grammar:

$$\begin{aligned} \phi {:}{:}={true } \;\; | \;\; \lnot \phi \;\; | \;\; \phi \wedge \phi \;\; | \;\; \exists (n\hookrightarrow {\hat{n}}, \phi ) \end{aligned}$$

where \(n,{\hat{n}}\) are patterns. The existential quantifier features a morphism (denoted by a hooked arrow) from n to \({\hat{n}}\) which relates, i.e., binds, elements in outer conditions (n) to inner (nested) conditions (\({\hat{n}}\)) and is therefore also called a binding.

Let \({\mathscr {L}}\) be the language for queries with \(ac\) based on NGC and \(\theta :=(n,\phi )\) with \(\theta \in {\mathscr {L}}\), i.e., \(\phi \) is an NGC. The answer set \(\mathcal {A} \) for \(\theta \) over a host graph G contains all matches m in G for the pattern n that satisfy \(\phi \). We also denote \(\mathcal {A}\) by \(\mathcal {A} (G)\) when \(\theta \) is clear from the context. Intuitively, the existential quantifier in a query \((n,\, \exists (n\hookrightarrow {\hat{n}},{\hat{\phi }}))\) is satisfied for a match m for n when (i) there is a match \({\hat{m}}\) for \({\hat{n}}\) in G such that \({\hat{m}}\) satisfies \({\hat{\phi }}\); and (ii) \({\hat{m}}\) is compatible with m, i.e., respects the binding between the two patterns captured in \(n\hookrightarrow {\hat{n}}\). The operator \({true } \) is always satisfied. The intuition behind negation and conjunction is similar to that in first-order logic. In the remainder, we abbreviate \(\lnot (\lnot \phi \wedge \lnot {\hat{\phi }})\) by \(\phi \vee {\hat{\phi }}\), \(\exists (n \hookrightarrow {\hat{n}}, \phi )\) by \(\exists ({\hat{n}}, \phi )\), and \(\exists (n, {true })\) by \(\exists \, n\).
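The recursive satisfaction check for NGCs can be sketched as a small interpreter over the grammar above. The `extend` callback, which returns the compatible extensions of a match for a nested pattern, is our hypothetical stand-in for an actual pattern matcher; the class names are likewise ours.

```python
# AST for nested graph conditions; satisfaction is checked recursively,
# mirroring the grammar: true | not phi | phi and phi | exists(n -> n^, phi).
class TrueC:
    def sat(self, m, extend):
        return True   # the operator true is always satisfied

class Not:
    def __init__(self, phi): self.phi = phi
    def sat(self, m, extend):
        return not self.phi.sat(m, extend)

class And:
    def __init__(self, a, b): self.a, self.b = a, b
    def sat(self, m, extend):
        return self.a.sat(m, extend) and self.b.sat(m, extend)

class Exists:
    """∃(n ↪ n̂, φ): some compatible extension of the match m to n̂ satisfies φ."""
    def __init__(self, pattern, phi=None):
        self.pattern, self.phi = pattern, phi or TrueC()
    def sat(self, m, extend):
        # extend(m, pattern) yields matches for the nested pattern
        # that respect the binding captured by the morphism n ↪ n̂
        return any(self.phi.sat(mh, extend) for mh in extend(m, self.pattern))
```

With this sketch, the condition \(\phi _1\) of the example query \(\theta _1\), i.e., \(\lnot \exists \, n_{1.1} \wedge \exists \, n_{1.2}\), would be written `And(Not(Exists(n11)), Exists(n12))`.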

Fig. 6 Statements on SHS as patterns—braces contain Boolean expressions over the values of attributes of the pattern; vertices with the same label refer to the same object in the host graph

As an example, assume the following hypothetical requirement which draws from operation sequence compliance, i.e., the order of service invocations, in an SHS (see [109]): “When a sensor service is invoked for a patient by the main service, there exists no other sensor service for the same patient. Moreover, a drug service should be invoked for the same patient.” Based on the SHS metamodel in Fig. 2, the main service is represented by SHSService, the sensor service by PMonitoringService, and the drug service by DrugService. Then, the described situations, i.e., a sensor service invoked by a main service, may be captured by the patterns \(n_1,n_{1.1},\) and \(n_{1.2}\), illustrated in Fig. 6, where the attribute constraints in \(n_{1.1}\) and \(n_{1.2}\) (illustrated between braces) ensure that the situations concern the same patient.

Formulated as a query in \(\mathscr {L}\), the requirement is translated into: “find all matches of pattern \(n_1\) in G that satisfy \(\phi _1\), i.e., where a match for \(n_{1.1}\) does not exist while a match for \(n_{1.2}\) does.” In \(\mathscr {L}\), this query is captured by \(\theta _1:=(n_1,\phi _1)\) with \(\phi _1\) an NGC defined as \(\lnot \exists \, n_{1.1} \wedge \exists \, n_{1.2}\). Nesting implies that the vertices s and pm from \(n_{1}\) are bound in the inner patterns \(n_{1.1}\) and \(n_{1.2}\), i.e., all patterns refer to the same s and pm in G. In our illustrations, this is encoded by the usage of the same label for bound elements. The answer set \(\mathcal {A} (G_5)\) for \(\theta _1\), with \(G_5\) from Fig. 4, consists of one match for \(n_1\) which satisfies \(\phi _1\), i.e., a pm is found which is connected to a DrugService with the same pID and not connected to any other sensor services.

Query Operationalization A graph query is a declarative means to express a structure of interest which should satisfy a given condition. The query itself does not specify instructions on how to execute the query, i.e., its operationalization. For the operationalization of queries, we build on a formal framework that we have previously presented and that supports queries in \({\mathscr {L}}\). The framework in question decomposes a query with an arbitrarily complex NGC as \(ac\) into a suitable ordering of simple, pattern-based sub-queries called a Generalized Discrimination Network (GDN) [18].

A GDN is a directed acyclic graph where each graph node represents a (sub-)query. To avoid confusion, we refer to the GDN as a network. Dependencies between queries are represented by edges from child nodes, i.e., the nodes whose results are required, to the parent node, i.e., the node which requires the results. Dependencies can either be positive, i.e., the query realized by the parent node requires the presence of matches of the child node, or negative, i.e., the query of the parent node forbids the presence of such matches. The overall query is executed bottom-up: the execution starts with leaves and proceeds upward in the network. The terminal node computes the \(\mathcal {A}\) of the query.
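A bottom-up GDN execution can be sketched as follows. The `match_local` and `compatible` callbacks are our hypothetical stand-ins for pattern matching and binding compatibility, and, unlike an actual GDN, this sketch recomputes child results on every execution rather than marking and reusing them.

```python
class GDNNode:
    """One sub-query in a network of simple graph sub-queries.
    Positive children must contribute a compatible match for a parent match
    to survive; negative children must not (forbidding their matches)."""
    def __init__(self, name, match_local, pos=(), neg=()):
        self.name, self.match_local = name, match_local
        self.pos, self.neg = list(pos), list(neg)

    def execute(self, host, compatible):
        # bottom-up: child results are computed before the node's own matches
        pos_results = [child.execute(host, compatible) for child in self.pos]
        neg_results = [child.execute(host, compatible) for child in self.neg]
        answers = []
        for m in self.match_local(host):
            has_all_pos = all(any(compatible(m, c) for c in res)
                              for res in pos_results)
            has_no_neg = all(not any(compatible(m, c) for c in res)
                             for res in neg_results)
            if has_all_pos and has_no_neg:
                answers.append(m)
        return answers   # at the terminal node: the answer set of the query
```

For the GDN of \(\theta _1\), the terminal node \(N_1\) would hold \(N_{1.2}\) as a positive and \(N_{1.1}\) as a negative dependency, so a match for \(n_1\) survives only if it extends to a match of \(n_{1.2}\) and to no match of \(n_{1.1}\).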

In [18], a GDN is realized as a set of graph transformation rules where each GDN node, i.e., each (sub-)query, is associated with one transformation rule. The LHS of the rule searches for matches of the corresponding query in a given host graph G. The RHS of the rule creates a marking node in G that marks each match and marking edges from the marking node to each node of the match—marking nodes are not to be confused with regular graph nodes in G (which, in this context, represent entities of the modeled system); thus, we use the term vertex for regular graph nodes. In order to be able to create marking nodes and edges, the transformation rules of a GDN, henceforth called marking rules, are typed over an extended type graph which adds the required types for marking nodes and edges to the initial type graph. The LHS of rules with dependencies have \(ac\) that require the existence of marking nodes of their positive dependencies and forbid the existence of marking nodes of their negative dependencies.

Fig. 7 GDN and marking rules for \(\theta _1\)—based on [18]

The GDN for \(\theta _1\) from earlier is shown in Fig. 7, where each square represents a GDN node. Each node is associated with a marking rule. The GDN consists of three nodes, i.e., rules: the node \(N_{1.1}\) searching for the pattern \(n_{1.1}\), the node \(N_{1.2}\) searching for \(n_{1.2}\), and the topmost node \(N_{1}\) searching for \(n_1\). Node \(N_{1}\) computes its matches by matching its pattern and checking whether both of its dependencies are satisfied (the conjunction in \(\phi _1\)). The negative dependency which captures the negation in \(\phi _1\) (drawn with a dashed line) is satisfied when a match for \(N_{1}\) cannot be extended by a match for \(N_{1.1}\). All nodes are realized by marking rules whose LHS matches a pattern and whose RHS creates marking nodes and edges that mark the matches of the LHS. The rules for nodes \(N_{1.1}\), \(N_{1.2}\) are shown in Fig. 7 (within rectangles), where (i) marking nodes are illustrated by circles and (ii) the marking nodes and edges added by a rule are dashed and annotated with “++”. For presentation purposes, the illustrations of rules thus contain both their LHS and RHS.

Incremental Execution Optimization techniques such as local search can be employed to reduce the pattern matching effort of GDN nodes (see [3]). In local search, pattern matching initiates from a single element and builds a match candidate iteratively, following a heuristics-based search plan.

Owing to the decomposition of the query into simpler marking rules as well as local search, a GDN is amenable to incremental execution. Changes in G are propagated through the network, whose nodes only recompute their results if the change concerns them or one of their dependencies. If a re-computation is deemed necessary, owing to local search, a node is capable of updating its matches starting from changed elements instead of starting over. Therefore, we say that the query is also executed incrementally as its \(\mathcal {A}\) is updated by each GDN execution.

2.5 Metric temporal graph logic

Metric Temporal Graph Logic (MTGL) [50] enables the formulation and checking of temporal requirements on patterns, i.e., requirements on the evolution of patterns over time.

MTGL builds on NGCs and Metric Temporal Logic [69] to enable the definition of Metric Temporal Graph Conditions (MTGCs) on patterns. Additionally to the NGC operators, MTGCs support metric, i.e., interval-based, temporal operators: the until (\(\mathrm {U}_I\), with I an interval in \({\mathbb {R}}_{0}^{+}\)) and its dual since (\(\mathrm {S}_I\)). The syntax of an MTGC \(\psi \) is given by:

$$\begin{aligned} \psi {:}{:}={true } \; | \; \lnot \psi \; | \; \psi \wedge \psi \; | \; \exists (n, \psi ) \; | \; \psi \, \mathrm {U}_I \psi \; | \; \psi \, \mathrm {S}_I\psi \end{aligned}$$

with n a pattern. The operators eventually (\(\lozenge _I\)) and once \((\blacklozenge _I)\) are abbreviations of until and since: \(\lozenge _I\,\psi = {true } \, \mathrm {U}_I \,\psi \) and \(\blacklozenge _I\,\psi ={true } \,\mathrm {S}_I\,\psi \). We abbreviate exists similarly to NGCs.

MTGL reasons over sequences of graphs. Intuitively, this is motivated by the logic expressing requirements on the evolution of a pattern over time, i.e., over consecutive graphs. However, MTGCs can also be equivalently checked over a graph with history [50], which here corresponds to an RTM\(^{\mathrm{H}}\). The intuition behind satisfaction for the MTGL operators \(\exists , \mathrm {U}_I, \mathrm {S}_I\) is as follows.

Let \(\psi \) be the MTGC \(\exists (n,{\hat{\psi }})\). A match m for n in an RTM\(^{\mathrm{H}}\) \(H_{[\tau ]}\) satisfies \(\psi \) at time point \(\tau \in {\mathbb {T}}\) if \(\max _{\epsilon \in E}{\epsilon .cts } \le \tau < \min _{\epsilon \in E}{\epsilon .dts }\), with E the elements of m, i.e., all elements of m coexist at \(\tau \), and there is a match, compatible with m, which satisfies \({\hat{\psi }}\). Given two MTGCs \(\psi , {\hat{\psi }}\), the MTGC \(\psi \, \mathrm {U}_I \, {\hat{\psi }}\) is satisfied at a time point \(\tau \) when there is a time point \(\tau '\) with \(\tau ' - \tau \in I\) where \({\hat{\psi }}\) is satisfied, and \(\psi \) is satisfied for all \(\tau '' \in [\tau , \tau ')\). The intuition is reversed for \(\psi \, \mathrm {S}_I \,{\hat{\psi }}\), which is satisfied at a time point \(\tau \) when there is a \(\tau '\) with \(\tau - \tau ' \in I\) where \({\hat{\psi }}\) is satisfied, and \(\psi \) is satisfied for all \(\tau '' \in (\tau ', \tau ]\).
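Under the simplifying assumption that satisfaction only needs to be checked at a given finite set of time points (MTGL itself is defined over dense time), the conditions above can be sketched as follows; the function names and the dictionary-based element encoding are ours.

```python
import math

def lifetime(match_elements):
    """Interval [max cts, min dts) during which all elements of a match coexist."""
    cts = max(e["cts"] for e in match_elements)
    dts = min(e["dts"] for e in match_elements)
    return cts, dts

def holds_exists(match_elements, tau):
    """Exists: every element of the match must be alive at time point tau."""
    cts, dts = lifetime(match_elements)
    return cts <= tau < dts

def holds_since(psi, psi_hat, tau, interval, time_points):
    """psi S_I psi_hat at tau: psi_hat held at some tau' with tau - tau' in I,
    and psi held at every inspected time point in (tau', tau]."""
    lo, hi = interval
    for tp in time_points:
        if tp <= tau and lo <= tau - tp <= hi and psi_hat(tp):
            if all(psi(t) for t in time_points if tp < t <= tau):
                return True
    return False
```

For instance, for the elements pm\(_{2}\) (cts 7, dts \(\infty \)) and d\(_{1}\) (cts 4, dts 7) of the running example, the joint lifetime is [max cts, min dts), so a match over both exists at time point 5 only if their intervals overlap there.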

3 The InTempo scheme

An overview of the functionality of InTempo is provided in Sect. 1 and illustrated in Fig. 1. Building on the technical prerequisites in Sect. 2, this section presents InTempo in further technical detail—see Fig. 8 for a graphical reference—and shows how the requirements for a history encoding (R1), a query language (R2), fast answers (R3), and memory-efficiency (R4) defined in Sect. 1 are fulfilled.

Design-time InTempo assumes that the following artifacts have been made available at design-time: (i) a metamodel of the system with creation and deletion timestamps for each vertex; and (ii) a mapping of events to model modifications, e.g., additions or deletions of vertices.

Runtime To represent the history of a system, i.e., for R1, InTempo relies on an RTM\(^{\mathrm{H}}\) (see Sect. 2.3) which, by featuring creation and deletion timestamps, captures temporal information on multiple past versions of an RTM into a single, consolidated representation.

Fig. 8 Overview of InTempo Operations

For R2, we introduce a novel language (see Sect. 4) which allows for the specification of temporal graph queries, i.e., queries on the ordering and timing in which patterns are added or deleted in the RTM\(^{\mathrm{H}}\). Compared to the queries with NGCs presented in Sect. 2.4, temporal graph queries use MTGCs (see Sect. 2.5) for the formulation of \(ac\) , i.e., they incorporate (past and future) temporal operators with timing constraints, and, moreover, compute the interval for which matches for a temporal graph query satisfy the query’s \(ac\).

The operation of InTempo at runtime is reactive and is driven by a sequence of changes encapsulated in (timed) events. First, upon the occurrence of an event, the system invokes InTempo. Then, InTempo consults the event mapping, makes the corresponding modifications to the RTM\(^{\mathrm{H}}\), and sets the values of the creation and deletion timestamps based on the time point of the event. Finally, the scheme executes the input queries for each modification and returns the answers to the system.
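A minimal sketch of this reactive loop, assuming a dictionary-based model and illustrative event names; `RTMH`, `EVENT_MAPPING`, and `on_event` are hypothetical stand-ins, not InTempo's actual API:

```python
INF = float("inf")

class RTMH:
    """Toy history-aware model: vertices are kept after deletion."""
    def __init__(self):
        self.vertices = {}                       # id -> {"cts":.., "dts":..}
    def create(self, vid, tau):
        self.vertices[vid] = {"cts": tau, "dts": INF}
    def delete(self, vid, tau):
        self.vertices[vid]["dts"] = tau          # element kept, only dts set

EVENT_MAPPING = {                                # event kind -> modification
    "ServiceInvoked":  lambda m, e: m.create(e["id"], e["time"]),
    "ServiceFinished": lambda m, e: m.delete(e["id"], e["time"]),
}

def on_event(model, event, queries):
    EVENT_MAPPING[event["kind"]](model, event)   # apply the modification
    return [q(model) for q in queries]           # re-execute the queries

model = RTMH()
on_event(model, {"kind": "ServiceInvoked", "id": "s1", "time": 4}, [])
on_event(model, {"kind": "ServiceFinished", "id": "s1", "time": 7}, [])
print(model.vertices["s1"])   # {'cts': 4, 'dts': 7}
```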

For systems with lengthy histories or a high rate of incoming events, the re-computation of matches from scratch upon every event would quickly lead to slow query executions, arguably rendering a runtime solution impractical. Instead, for the execution of temporal graph queries over an RTM\(^{\mathrm{H}}\), InTempo uses a novel operationalization framework which supports incremental execution of temporal graph queries (Sect. 5). This feature allows for fast query executions (R3) and also gives InTempo its name: Incremental execution of Temporal graph queries. Aiming for fast executions and memory-efficiency, InTempo supports an optional history representation which contains only elements that are relevant to query executions (Sect. 6). Queries over this constrained RTM\(^{\mathrm{H}}\) may yield faster executions while affording increased memory-efficiency.

Operations InTempo consists of two core operations: Operationalization and Execution. Maintenance constitutes an optional extension which, if enabled, is performed after Execution. We outline each operation below.

  • Operationalization: This operation constructs a temporal GDN for each of the input temporal graph queries. The operation extends the GDN construction presented in Sect. 2.4 by introducing concepts for handling structural matches whose validity is based on the cts and dts of their elements as well as for evaluating the until and since operators from MTGL. This operation is performed once per query at the beginning and is repeated only if the set of input queries has changed between invocations of InTempo.

  • Execution: The operation executes the temporal GDN(s) over the RTM\(^{\mathrm{H}}\). A temporal GDN processes the modifications made to the RTM\(^{\mathrm{H}}\) and, for modifications that are relevant to the queries, executes the affected sub-queries. The sub-queries search for matches of patterns while taking into account timing constraints on the occurrence of patterns. By virtue of the incremental execution of the GDN, matches are updated incrementally after each event.

  • Maintenance: The operation relies on the computation of a time window (during Operationalization) which is based on timing constraints of the temporal operators in input queries. The operation uses the window to decide when deleted elements are not going to be involved in future query executions and can be thus pruned from the RTM\(^{\mathrm{H}}\).
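The pruning idea behind Maintenance can be sketched as follows, assuming a time window already derived from the queries' timing constraints; the vertex encoding and the function name `prune` are illustrative assumptions:

```python
# Sketch: a deleted element whose dts lies further in the past than the
# window can no longer be involved in future query executions and is pruned.
def prune(vertices, now, window):
    """vertices: dict id -> {'cts':.., 'dts':..}; keeps an element iff it
    was deleted after now - window (or not deleted at all)."""
    return {v: ts for v, ts in vertices.items() if ts["dts"] > now - window}

vs = {"a": {"cts": 0, "dts": 5}, "b": {"cts": 2, "dts": float("inf")}}
print(prune(vs, now=100, window=60))   # only 'b' survives
```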

4 Language for temporal graph queries

Temporal requirements are always present in medical guidelines and their satisfaction is key to the successful completion of procedures [26]. Adding a temporal dimension to the exemplary requirement from Sect. 2.4 makes it similar to compliance checking of medical procedures which may track time between triage and admission [78], here represented by the invocation of a sensor service (\(n_1\)) and a drug service (\(n_{1.2}\)), respectively: “When a sensor is invoked for a patient, there should be a drug service invoked for the same patient within one minute and, until then, there should be no other sensor service invoked for the same patient.” The specific timing constraint is adjusted for the purpose of presentation.

Formulated in a query, this instruction includes temporal requirements on the evolution of graph structures: “find all matches for \(n_1\) such that, for a match for \(n_1\) at a time point \(\tau \), at least one match for \(n_{1.2}\) is found at some time point \(\tau ' \in [\tau , \tau + 60]\), i.e., at most 60 time units later; in addition, at each time point \(\tau '' \in [\tau , \tau ')\) in between, no match for \(n_{1.1}\) is present,” where all n patterns refer to the same pm and by time unit we refer to the unit of measurement with which the system tracks time—here, assumed to be a second.

The language \({\mathscr {L}}\) introduced in Sect. 2.4, which employs NGCs for the definition of \(ac\), does not inherently support temporal requirements on patterns. MTGCs, introduced in Sect. 2.5, build on NGCs and allow for the specification of a desired ordering and a timing constraint on the evolution of graph structures—thereby supporting temporal requirements. Moreover, in an encoding where elements have lifespans, a structural occurrence of a pattern consisting of such elements has to be accompanied by information on when and for how long a match exists. This requirement is emphasized for application scenarios where a decision may depend on timing, e.g., in runtime adaptation.

Temporal logics that reason over intervals, such as MTGL, are capable of defining the truth value of a formula for every time point in the time domain. Building on this capability, we introduce the query language \({\mathscr {L}}_\mathrm {T}\) which enables the formulation of temporal graph queries over an RTM\(^{\mathrm{H}}\). A temporal graph query \(\zeta \in \mathscr {L}_\mathrm {T} \) is characterized similarly to graph queries with NGCs in \(\mathscr {L}\) , i.e., \(\zeta :=(n, ac)\). However, in contrast to \(\mathscr {L}\), temporal graph queries feature \(ac\) based on MTGCs thereby supporting temporal requirements. In \({\mathscr {L}}_\mathrm {T}\), the exemplary query above is captured by \(\zeta _1 :=(n_1, \psi _1)\) where \(\psi _1\) is an MTGC defined as \(\lnot \exists \,n_{1.1} \,\mathrm {U}_{[0,60]}\, \exists \, n_{1.2}\). Recall that elements common to \(n_{1}\) and the patterns \(n_{1.1}, n_{1.2}\) are bound—see Sect. 2.4.

Compared to \(\mathcal {A}\), an answer set \(\mathcal {T}\) for a query \(\zeta \in \mathscr {L}_\mathrm {T} \) is extended with a temporal dimension: matches in \(\mathcal {T}\) are paired with a temporal validity, i.e., the set of time points for which (i) the matched elements co-exist in the RTM\(^{\mathrm{H}}\) and satisfy the attribute constraint, called the lifespan of the match; (ii) the match satisfies the \(ac\) of \(\zeta \), called the satisfaction span. We elaborate on the lifespan, the satisfaction span, and the temporal validity below.

4.1 Lifespan of a match

Vertices of an RTM\(^{\mathrm{H}}\) have attributes which capture their creation (cts) and deletion (dts) timestamps. For an element \(\epsilon \), we define its lifespan as the non-empty non-negative interval \([\epsilon .cts , \epsilon .dts )\). The intuition behind the lifespan of an element being right-open is that if an element has been deleted at a time point \(\tau \), the element has existed until a time point that approaches but is not equal to \(\tau \). A match is valid only if there is a non-empty interval \(\lambda ^{m}\), called the lifespan of the match, during which the lifespans of all matched elements E overlap:

$$\begin{aligned} \lambda ^{m}=\bigcap \limits _{\epsilon \in E} [\epsilon .cts , \epsilon .dts ) \end{aligned}$$
(1)

Attribute values of matched elements do not change (see Sect. 2.4) and, hence, cannot affect the lifespan computation. Element timestamps are always assigned from \({\mathbb {R}}_{0}^{+}\), therefore it always holds that \(\lambda ^{m} \subseteq {\mathbb {R}}_{0}^{+}\). In the special case where the pattern in \(\zeta \) is the empty graph \(\emptyset \), an (empty) match m is always found with \(\lambda ^m={\mathbb {R}}_{0}^{+}\).
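Eq. 1 translates directly into code when lifespans are simplified to half-open (cts, dts) pairs; this representation is an assumption for illustration:

```python
# Sketch of Eq. 1: the lifespan of a match is the intersection of the
# lifespans [cts, dts) of all matched elements.
INF = float("inf")

def match_lifespan(elements):
    """elements: list of (cts, dts) pairs; returns the intersection as a
    (lo, hi) pair, or None if the lifespans do not overlap."""
    lo = max(e[0] for e in elements)
    hi = min(e[1] for e in elements)
    return (lo, hi) if lo < hi else None

print(match_lifespan([(2, 9), (4, INF)]))   # (4, 9)
print(match_lifespan([(2, 3), (4, INF)]))   # None: no common time point
```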

4.2 Satisfaction span and temporal validity

We call the set of time points for which an MTGC is satisfied its satisfaction span, denoted by \({\mathcal {Y}}\). In the context of a query \((n, \psi ) \in \mathscr {L}_\mathrm {T} \) or a nested condition with n as enclosing pattern, a satisfaction span related to a match m for n is defined as \({\mathcal {Y}}(m, \psi )=\{ \tau \, | \, \tau \in {\mathbb {R}}\wedge m \text { satisfies }\psi \text { at } \tau \}\). The temporal validity of the match is the set of time points for which m exists and satisfies \(\psi \), i.e., the intersection of the lifespan of a match with the satisfaction span, and is denoted by \({\mathcal {V}}(m, \psi )\).

The intersection of two intervals is always an interval, whereas the union of two intervals may result in disjoint, i.e., disconnected, sets. To encode such unions, we define an interval set \({\mathbb {I}} \subseteq {\mathbb {R}}\) which may contain disjoint or empty intervals. Note that a set operation between an \({\mathbb {I}} \in {\mathcal {F}}\) and an \(I \in {\mathcal {I}}\), with \({\mathcal {F}}\) and \({\mathcal {I}}\) the set of all interval sets and intervals, respectively, may result in an \({\mathbb {I}}'\in {\mathcal {F}}\). The satisfaction span \({\mathcal {Y}}\) and the temporal validity \({\mathcal {V}}\) may depend on unions of intervals or operations with other interval sets and are, therefore, interval sets themselves.
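A simplified sketch of such interval sets, represented as sorted lists of disjoint half-open (lo, hi) pairs; the open/closed end-point bookkeeping of the formal definition is deliberately omitted here:

```python
def normalize(intervals):
    """Union: merge overlapping or adjacent intervals into a sorted,
    disjoint list; empty intervals are dropped."""
    out = []
    for lo, hi in sorted(i for i in intervals if i[0] < i[1]):
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

def intersect(iset, interval):
    """Intersection of an interval set with a single interval — the result
    may again be an interval set."""
    lo, hi = interval
    return [(max(a, lo), min(b, hi)) for a, b in iset
            if max(a, lo) < min(b, hi)]

s = normalize([(0, 2), (5, 8), (1, 3)])
print(s)                       # [(0, 3), (5, 8)] — a disjoint interval set
print(intersect(s, (2, 6)))    # [(2, 3), (5, 6)]
```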

The definition below presents the recursively defined satisfaction computation \({\mathcal {Z}}\) of an MTGC. An explanation of the intuition behind the definition follows.

Definition 1

(satisfaction computation \({\mathcal {Z}}\)) Let \(n, {\hat{n}}\) be patterns and \(\psi , \chi , \omega \) be MTGCs. Moreover, let m be a match for n. The satisfaction computation \({\mathcal {Z}}(m, \psi )\) is recursively defined as follows.

$$\begin{aligned}&{\mathcal {Z}}(m, {true }) = {\mathbb {R}} \end{aligned}$$
(2)
$$\begin{aligned}&{\mathcal {Z}}(m, \lnot \chi )= {\mathbb {R}} \setminus {\mathcal {Z}}(m, \chi ) \end{aligned}$$
(3)
$$\begin{aligned}&{\mathcal {Z}}(m, \chi \wedge \omega )= {\mathcal {Z}}(m, \chi ) \cap {\mathcal {Z}}(m, \omega ) \end{aligned}$$
(4)
$$\begin{aligned}&{\mathcal {Z}}(m,\exists ({\hat{n}}, \chi )) = \bigcup \limits _{{\hat{m}} \in {{\hat{M}}}} \lambda ^{{\hat{m}}} \cap {\mathcal {Z}}({\hat{m}}, \chi ) \end{aligned}$$
(5)
$$\begin{aligned}&{\mathcal {Z}}(m,\chi \mathrm {U}_I \omega )= {\left\{ \begin{array}{ll} \bigcup \limits _{i \in {\mathcal {Z}}(m, \omega ),\, j \in J_i} j \cap \big ((j^{+} \cap i) \ominus I\big ) &{} \text {if } 0 \not \in I \\ \bigcup \limits _{i \in {\mathcal {Z}}(m, \omega )} i \, \cup \bigcup \limits _{ j \in J_i} j \cap \big ((j^{+} \cap i) \ominus I\big ) &{} \text {if } 0 \in I \end{array}\right. }\nonumber \\ \end{aligned}$$
(6)
$$\begin{aligned}&{\mathcal {Z}}(m,\chi \mathrm {S}_I \omega )= {\left\{ \begin{array}{ll} \bigcup \limits _{i \in {\mathcal {Z}}(m,\omega ),\, j \in J_i} j \cap \big ((^{+}j \cap i) \oplus I\big ) &{} \text {if } 0 \not \in I \\ \bigcup \limits _{i \in {\mathcal {Z}}(m,\omega )} i \, \cup \bigcup \limits _{ j \in J_i} j \cap \big ((^{+}j \cap i) \oplus I\big ) &{} \text {if } 0 \in I \end{array}\right. }\nonumber \\ \end{aligned}$$
(7)

with:

  • \({\hat{M}}\) a set containing only matches that are compatible with the (enclosing) match m—see Sect. 2.4

  • \(J_i\) the set of all intervals in \({\mathcal {Z}}(m, \chi )\) that are either overlapping with or adjacent to some \(i\in {\mathcal {Z}}(m, \omega )\)

  • \(^{+}k\) the union \(\{\ell (k)\} \cup k\), i.e., making k left-closed, and \(k^{+}\) defined symmetrically (making k right-closed)

  • \(k \oplus l = [\ell (k)+\ell (l),r(k)+r(l)]\), \(k \ominus l = [\ell (k)-r(l),r(k)-\ell (l)]\) for \(k,l\in {\mathcal {I}}\) with \(\ell (k)\) and r(k) the left and right end-point of k, respectively—note that end-points are reversed in subtraction.

The intuition behind the equations for \({true } \), negation, and conjunction is clear. Regarding exists, the satisfaction span can be computed based on a (sub-)query which searches for \({\hat{n}}\)—the computation relies on the temporal validity of all matches \({\hat{m}}\) for \({\hat{n}}\) which are compatible with m.

For until, the computation is conditional on the timing constraint I. If \(0 \not \in I\), i.e., \(\ell (I)\not =0\), the satisfaction span includes every time point in the intersection of some \(i' \in {\mathcal {Z}}(m, \omega )\) with a \(j' \in {\mathcal {Z}}(m, \chi )\) from which a time point \(\tau '\) in \(i'\) is reached within I. Furthermore, \(j'\) needs to overlap \(i'\), e.g., \(j'=[1,3],i'=[2,4]\), or span until the very last time point before \(i'\), i.e., be adjacent to it, e.g., \(j'=[1,2),i'=[2,4]\). If \(j'\) and \(i'\) are adjacent, during the computation \(j'\) becomes right-closed, i.e., \(j'^{+}=[1,2]\), to ensure that their intersection produces a non-empty set. If \(0 \in I\), then, by the semantics, \(j'\) may be empty, i.e., not exist, in which case until is satisfied by every \(i' \in {\mathcal {Z}}(m, \omega )\). Therefore, the computation includes every \(i'\) and remains otherwise unchanged. The intuition behind since is analogous.
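The case \(0 \not \in I\) can be sketched for a single pair of intervals, using closed intervals and ignoring the end-point adjustments of the full definition; `until_pair` is an illustrative helper, not part of the formal machinery:

```python
def minkowski_minus(k, l):
    """k ⊖ l = [ℓ(k) - r(l), r(k) - ℓ(l)] — end-points are reversed."""
    return (k[0] - l[1], k[1] - l[0])

def until_pair(j, i, I):
    """Contribution of a χ-interval j and an ω-interval i to χ U_I ω:
    j ∩ ((j ∩ i) ⊖ I), assuming closed intervals throughout."""
    core = (max(j[0], i[0]), min(j[1], i[1]))   # j⁺ ∩ i (closed approx.)
    if core[0] > core[1]:
        return None                              # j, i neither overlap nor touch
    shifted = minkowski_minus(core, I)           # time points reaching i via I
    lo, hi = max(j[0], shifted[0]), min(j[1], shifted[1])
    return (lo, hi) if lo <= hi else None

# χ holds on [1,3], ω holds on [2,4], timing constraint I = [1,2]:
print(until_pair((1, 3), (2, 4), (1, 2)))   # (1, 2)
```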

The following theorem states that the set of time points in the satisfaction span \({\mathcal {Y}}\) is equal to the set of time points obtained by the satisfaction computation \({\mathcal {Z}}\).

Theorem 1

Given a match m and an MTGC \(\psi \), the satisfaction span \({\mathcal {Y}}\) of m for \(\psi \) is given by the satisfaction computation \({\mathcal {Z}}\) of m for \(\psi \), that is, \({\mathcal {Y}}(m, \psi )={\mathcal {Z}}(m, \psi )\).

Proof

(sketch) By structural induction over \(\psi \). For every equation, inclusion is shown in both directions. See Sect. B.1 for the proof. \(\square \)

Thus, \({\mathcal {Z}}\) enables the computation of all time points in \({\mathbb {R}}\) for which a match satisfies an MTGC, i.e., \({\mathcal {Y}}\). We now present a technical definition of the temporal validity \({\mathcal {V}}\) of a match.

Definition 2

(temporal validity \({\mathcal {V}}\)) Let n be a pattern, \(\psi \) an MTGC, and \(H_{[\tau ]} \) an RTM\(^{\mathrm{H}}\). Moreover, let \((n, \psi )\) be a query in \(\mathscr {L}_\mathrm {T}\). Then, for a match m for n in \(H_{[\tau ]}\) with lifespan \(\lambda ^m\), its temporal validity \({\mathcal {V}}\), denoted as \({\mathcal {V}}(m, \psi )\), is an interval set defined as \({\mathcal {V}}(m, \psi ) :=\lambda ^m \cap {\mathcal {Z}}(m, \psi )\), with \({\mathcal {Z}}(m, \psi )\) the satisfaction span of \(\psi \) for m.

\({\mathcal {Z}}(m, \psi )\) computes all time points for which m satisfies \(\psi \). This computation may need to take elements of the match m into account but is not bound by its lifespan. The temporal validity \({\mathcal {V}}(m, \psi )\) binds \({\mathcal {Z}}\) by \(\lambda ^m\): it computes the set of time points for which a match, besides being structurally present in the graph, satisfies \(\psi \). As an example, assume the query \((n, \lozenge _{[0,5]} \exists \,{\hat{n}})\). For a match m for n with \(\lambda ^m=[3,9)\) and a match \({\hat{m}}\) for \({\hat{n}}\) with \(\lambda ^{{\hat{m}}}=[3,6)\), the satisfaction span \({\mathcal {Z}}(m, \lozenge _{[0,5]} \exists \,{\hat{n}})\) is \([-2,6)\)—according to Eq. 6. However, \({\mathcal {V}}(m,\lozenge _{[0,5]} \exists \,{\hat{n}})\) is equal to \(\lambda ^m \cap [-2,6)=[3,6)\). As the example shows, \({\mathcal {Z}}\) may contain negative time points (hence \({\mathbb {R}}\) is used in Definition 1), whereas \({\mathcal {V}} \subseteq {\mathbb {R}}_0^+\), since \({\mathcal {V}}\) is produced by an intersection with \(\lambda ^m\). It also holds that \({\mathcal {V}}(m,\psi ) \subseteq {\mathcal {Z}}(m, \psi )\).
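The worked example can be re-traced in code under a simplification where end-points are approximated and \(\lozenge _I\) is treated as until over a single target interval; the helper names are assumptions for illustration:

```python
def eventually_span(target, I):
    """◊_I over one target interval: (target ⊖ I) ∪ target, exploiting that
    0 ∈ I here, so the target itself also satisfies the condition."""
    lo, hi = target[0] - I[1], target[1] - I[0]   # target ⊖ I
    return (min(lo, target[0]), max(hi, target[1]))

span = eventually_span((3, 6), (0, 5))            # satisfaction span Z
lam_m = (3, 9)                                    # lifespan of the match m
validity = (max(span[0], lam_m[0]), min(span[1], lam_m[1]))
print(span)       # (-2, 6): Z may contain negative time points
print(validity)   # (3, 6): V = lifespan ∩ Z
```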

4.3 Answer set

Based on the temporal validity, we can proceed with a technical definition of the output of a query in \(\mathscr {L}_\mathrm {T}\), that is, its answer set \(\mathcal {T}\). In an RTM\(^{\mathrm{H}}\), all vertices and, thus, all matches have a lifespan. Moreover, application conditions of queries in \(\mathscr {L}_\mathrm {T}\) are formulas of an interval-based temporal logic which decides the truth of a formula for every time point in \({\mathbb {R}}\). Therefore, the answer set of a query in \(\mathscr {L}_\mathrm {T}\) contains matches, i.e., structural occurrences of a pattern, associated with a temporal validity, i.e., the time points for which the match exists and satisfies its application condition.

Definition 3

(answer set \(\mathcal {T}\) ) Given a pattern n, an MTGC \(\psi \), and an RTM\(^{\mathrm{H}}\) \(H_{[\tau ]} \), the answer set \(\mathcal {T}\) for a query \(\zeta :=(n, \psi )\) over \(H_{[\tau ]} \) is given by:

$$\begin{aligned} \mathcal {T} (H_{[\tau ]}) = \{(m, {\mathcal {V}}(m, \psi )) | m \text { is a match for } n \, \wedge {\mathcal {V}}(m,\psi ) \not = \varnothing \} \end{aligned}$$

Recall that \({\mathcal {V}}(m,\psi )\) may be an interval set. The answer set allows for a precise definition of the output of \(\mathscr {L}_\mathrm {T}\). In the remainder, we rely on this definition to explain the output of queries in the examples and, moreover, to define a restricted answer set for queries over a constrained RTM\(^{\mathrm{H}}\).
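Definition 3 then amounts to a filter over matches; the match identifiers below are illustrative stand-ins:

```python
# Sketch of the answer set: keep only matches with a non-empty temporal
# validity, pairing each match with its validity (a list of intervals).
def answer_set(matches_with_validity):
    """matches_with_validity: list of (match, validity) pairs, where the
    validity is a possibly empty list of (lo, hi) intervals."""
    return [(m, v) for m, v in matches_with_validity if v]

# One match satisfies the condition on [4, 7); the other never does.
T = answer_set([("m_pm1", [(4, 7)]), ("m_pm2", [])])
print(T)   # [('m_pm1', [(4, 7)])] — m_pm2 is structurally present but excluded
```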

5 Operationalization of temporal graph queries

This section presents an operationalization framework that enables the incremental execution of a query \(\zeta \in \mathscr {L}_\mathrm {T} \). In InTempo, the activities described below are performed by Operationalization and Execution. Given \(\zeta \), Operationalization constructs an enhanced GDN which is capable of interval operations. Execution executes this GDN to find and update matches for \(\zeta \) in an RTM\(^{\mathrm{H}}\).

In the following, we refer to the framework in [18] (see Sect. 2.4) as base approach. The base approach considers graph queries in \(\mathscr {L}\) , i.e., where application conditions are formulated as NGCs, and thus does not support the temporal operators until and since and, in general, the temporal reasoning required by MTGL. Building on the base approach, we present our extensions to which we collectively refer as temporal approach or temporal GDN. The temporal GDN supports the formulation of an \(ac\) in MTGL and therefore enables incremental execution of queries in \(\mathscr {L}_\mathrm {T}\).

5.1 Marking rules of temporal GDN

Regarding marking rules, the main difference between the base approach and the temporal approach is that the RHS of a rule of the temporal approach creates a marking node that captures the duration of the match being marked. To this end, the type of marking nodes in the type graph is equipped with an attribute d of type interval set.

In detail, first, we adjust the concept of a regular marking rule of the base approach such that we obtain two variants: one where the RHS creates a marking node with a duration that coincides with the temporal validity \({\mathcal {V}}\) of the match being marked; and another where the RHS creates a marking node with a duration that coincides with the satisfaction span \({\mathcal {Z}}\) of the match being marked. The two rules are denoted by \(\mathcal {V}\)MR and \(\mathcal {Z}\)MR, respectively. Moreover, we introduce a new type of marking rule, denoted by \(\alpha \)MR, which allows for matching a varying number of nodes, a feature which is not supported by the base approach.

The temporal GDN incrementally updates in each execution step the set of all matches and potentially re-computes the duration of their marking nodes. Once created, marking nodes, together with the vertices they mark, remain in the RTM\(^{\mathrm{H}}\) in subsequent steps. We elaborate on each rule below.

Fig. 9 Temporal GDN: network and marking rules for \(\zeta _1\)—the computation of the duration of the created marking node by network nodes \(N_1,N_{1.1},N_{1.2}\) is based on Definition 2, by \(\alpha _{1.1}\), \(\alpha _{1.2}\) on Eq. 8, and by \(\mathrm {U}\) on Eq. 6

The \({\mathcal {V}}\) Marking Rule A \(\mathcal {V}\)MR sets the d of a created marking node to the temporal validity of a match m, given by Definition 2—thus, the rule is intended for exists operators. The satisfaction span is represented by the duration of the marking nodes of the dependencies of the rule. Since matched elements may be marking nodes, the computation of the lifespan of a match is slightly adjusted: for a matched element \(\epsilon \), its lifespan is given by \(\epsilon .d\) if \(\epsilon \) is a marking node, and by \([\epsilon .cts , \epsilon .dts )\) otherwise.
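The adjusted lifespan lookup can be sketched as follows; the dictionary-based element encoding is an assumption for illustration:

```python
# Sketch: marking nodes contribute their duration attribute d (an interval
# set), ordinary vertices contribute their [cts, dts) lifespan.
INF = float("inf")

def element_lifespan(elem):
    """elem: dict; a marking node carries 'd' (a list of intervals),
    an ordinary vertex carries 'cts' and 'dts'."""
    if "d" in elem:
        return elem["d"]
    return [(elem["cts"], elem["dts"])]

print(element_lifespan({"cts": 4, "dts": INF}))   # [(4, inf)]
print(element_lifespan({"d": [(5, 7)]}))          # [(5, 7)]
```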

Figure 9 contains an example of a \(\mathcal {V}\)MR: the node \(N_{1.1}\) for \(\exists \,n_{1.1}\) from \(\psi _1:=\lnot \exists \,n_{1.1} \,\mathrm {U}_{[0,60]}\, \exists \, n_{1.2}\). Marking nodes are illustrated by a split circle where the bottom compartment contains the duration attribute.

Contrary to rules in the base approach, a \(\mathcal {V}\)MR includes marking nodes of dependencies in the pattern of the query rather than in the \(ac\). This is necessary as these marking nodes are included in computations of the duration of the marking node created by the RHS.

The \({\mathcal {Z}}\) Marking Rule The \(\mathcal {Z}\)MR is intended for the operators in Definition 1 which pass their match on to enclosing MTGCs and do not include matched elements in their computations, i.e., the operators negation, conjunction, until, and since. Therefore, the LHS of the \(\mathcal {Z}\)MR contains all elements of its closest enclosing \(\mathcal {V}\)MR. For example, the \(\mathrm {U}\) node and the negation node in Fig. 9 contain all elements in \(n_1\). The \(\mathcal {Z}\)MR’s computation of d excludes these elements and only considers the satisfaction span of its dependencies, represented by the duration of marking nodes. The computation by the RHS depends on the operator—see Definition 1.

The \(\alpha \) Marking Rule In Eq. 5, the computation of the satisfaction span for exists relies on the union of all lifespans of matches that are compatible with the enclosed pattern \({\hat{n}}\). In order to compute this union, the corresponding node in the temporal GDN is required to keep track of all matches for \({\hat{n}}\). The number of these matches might vary in every GDN execution, a property which is not covered by conventional graph transformation rules—see Sect. 2.4.

Therefore, the node created for the exists operator is an amalgamated marking rule (\(\alpha \)MR). Such rules stem from amalgamated graph transformation [20], where an arbitrary number of parallel transformations are amalgamated, i.e., merged, into a single rule execution in one transformation step. Thus, an \(\alpha \)MR enables a node to be associated with a varying number of graph elements so that it can compute the union of their duration.

The LHS of an \(\alpha \)MR contains a kernel of graph elements that are bound by the enclosing operator (in \(\psi _1\), that would be the vertex s) and a multi-rule which matches an arbitrary number of instances of a certain marking node type. An \(\alpha \)MR thus groups the marking nodes matched by the multi-rule by matches of the kernel and aggregates the marking nodes’ duration. Hence, the \(\alpha \)MR corresponds to a GDN node with a single dependency on the node that creates the marking nodes which the \(\alpha \)MR groups.

The RHS of an \(\alpha \)MR creates an \(\alpha \) marking node which is connected to the marking nodes of its dependency (matched by the multi-rule) and the elements of the kernel marked by those marking nodes. Only a single \(\alpha \) marking node is created, regardless of the number of matches for the dependency. The duration of the \(\alpha \) marking node is the union of the duration of the marking nodes \(E^{M}\) matched by the multi-rule:

$$\begin{aligned} d^\alpha = \bigcup \limits _{\epsilon \in E^{M}} d^{\epsilon } \end{aligned}$$
(8)

See Fig. 9 for an example of an \(\alpha \)MR, the node named \(\alpha _{1.1}\), which groups matches of its dependency, the node \(N_{1.1}\).
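Eq. 8 amounts to a per-kernel-match union of durations; the sketch below merges half-open pairs and is a simplification of the formal interval sets:

```python
# Sketch of Eq. 8: the duration of an α marking node is the union of the
# durations of the marking nodes matched by the multi-rule.
def alpha_duration(marking_nodes):
    """marking_nodes: list of duration lists (one per matched marking node);
    returns their union as a sorted, disjoint list of intervals."""
    intervals = sorted(iv for d in marking_nodes for iv in d if iv[0] < iv[1])
    out = []
    for lo, hi in intervals:
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

# Two dependency marking nodes grouped under the same kernel match:
print(alpha_duration([[(0, 2)], [(1, 5), (8, 9)]]))   # [(0, 5), (8, 9)]
```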

5.2 Temporal GDN construction

In the previous section, we were concerned with the types of GDN rules and the types of marking nodes they create. This section focuses on GDN nodes, which form the components of the network and represent executions of GDN rules.

In InTempo, the construction of a temporal GDN from a query in \(\mathscr {L}_\mathrm {T}\) is performed by Operationalization—see Fig. 8. Compared to the base approach, the construction features two extensions: first, we translate the operators conjunction, negation, until, and since into GDN nodes corresponding to \(\mathcal {Z}\)MR; second, we translate the exists operator into a GDN node corresponding to a \(\mathcal {V}\)MR, a GDN node corresponding to an \(\alpha \)MR which is intended for grouping the matches of the node for the \(\mathcal {V}\)MR, and a positive dependency from the node for the \(\alpha \)MR to the node for the \(\mathcal {V}\)MR.

The construction is performed in two steps: (i) a traversal of the syntax tree of the MTGC of the query, where operators are replaced by GDN nodes; (ii) a traversal of the network created by the previous step, to create the dependencies between nodes and set the patterns of \(\mathcal {Z}\)MR nodes.

See Fig. 9 for the temporal GDN of \(\zeta _1\), where novel nodes, i.e., those created by our extensions to the base approach, are in light gray. The existential conditions in \(\psi _1\), i.e., \(\exists n_{1.1}\) and \(\exists n_{1.2}\), lead to the creation of nodes \(N_{1.1}\) and \(N_{1.2}\), respectively. These nodes are both dependencies of their respective \(\alpha \) nodes. The pattern \(n_{1.1}\) is enclosed by a negation, which leads to the creation of a negation node (“\(\lnot \)”) and a negative dependency between the negation node and \(\alpha _{1.1}\). The \(\mathrm {U}\) node is dependent on the negation node and \(\alpha _{1.2}\). The node \(N_{1}\), created for the pattern \(n_1\) of \(\zeta _1\), is dependent on the \(\mathrm {U}\) node. For more complex constructions, the instructions in [18] apply.

5.3 Temporal GDN execution

In InTempo, the execution of a temporal GDN is performed by Execution (see Fig. 8). The execution is recursive: given an input GDN node, the node’s dependencies are executed before the input node. Hence, although the execution starts at the root of the GDN, network leaves, which have no dependencies, are executed first.

An execution of the temporal GDN is performed after each modification to the RTM\(^{\mathrm{H}}\) and nodes start the pattern matching effort in the surrounding of elements affected by the modification—see local search in Sect. 2.4. The pattern matching for a node is skipped if the modification does not concern the node or one of its dependencies.
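The recursive, dependencies-first order can be sketched with a toy network mirroring Fig. 9; the dictionary-based node encoding is an assumption for illustration:

```python
# Sketch: dependencies of a node are executed before the node itself, so
# leaves execute first although the call starts at the root.
def execute(node, done=None):
    done = done if done is not None else []
    for dep in node.get("deps", []):
        execute(dep, done)
    if node["name"] not in done:          # each node executes once
        done.append(node["name"])
    return done

n11 = {"name": "N_1.1"}
a11 = {"name": "alpha_1.1", "deps": [n11]}
neg = {"name": "not", "deps": [a11]}
n12 = {"name": "N_1.2"}
a12 = {"name": "alpha_1.2", "deps": [n12]}
u   = {"name": "U", "deps": [neg, a12]}
n1  = {"name": "N_1", "deps": [u]}
print(execute(n1))
# ['N_1.1', 'alpha_1.1', 'not', 'N_1.2', 'alpha_1.2', 'U', 'N_1']
```

The printed order matches the node order of the worked execution over \(H_{[5]}\) in Sect. 5.3.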

We demonstrate an execution of a temporal GDN via an example based on the network constructed for query \(\zeta _1\) (see Fig. 9) and two RTM\(^{\mathrm{H}}\) instances: \(H_{[5]}\) and \(H_{[7]}\) in Fig. 5.

Execution over \(H_{[5]}\) Each item in the following lists marks the execution of a GDN node. Each GDN node execution leads to the creation of marking nodes whose duration depends on the type of the GDN node—the type of each GDN node is shown between braces next to the node name. The execution traverses the network and starts with node \(N_{1.1}\). It continues upward with nodes whose dependencies have been executed.

  • \(N_{1.1}\) (\(\mathcal {V}\)MR) No matches for \(n_{1.1}\) are found.

  • \(\alpha _{1.1}\) (\(\alpha \)MR) The vertices s and pm \(_{1}\) are found but no marking nodes from its dependency \(N_{1.1}\). The marking node created by the \(\alpha \)MR groups the duration of its dependency based on Eq. 8. Here, an empty duration is found, hence the empty interval is stored in its attribute \(d_{1.1}^\alpha \).

  • \(\lnot \) (\(\mathcal {Z}\)MR) The LHS of the rule is the same as \(n_1\). The node finds one match (vertices s and pm \(_{1}\)) and one marking node for its dependency, \(\alpha _{1.1}\). The duration \(d^\nu \) of the node is computed according to Eq. 3: \({\mathbb {R}} \setminus d_{1.1}^\alpha = {\mathbb {R}} \setminus \varnothing = {\mathbb {R}}\).

The negation node is one of the two dependencies of node \(\mathrm {U}\). Before proceeding with node \(\mathrm {U}\), the execution requires that the other dependency is executed as well, hence, via recursion, it proceeds with node \(N_{1.2}\):

  • \(N_{1.2}\) (\(\mathcal {V}\)MR) One match is found, containing s, pm \(_{1}\), d \(_{1}\), and the edges among them. One marking node is created with duration \([5,\infty )\), which coincides with the temporal validity of the match computed according to Definition 2.

  • \(\alpha _{1.2}\) (\(\alpha \)MR) One match for s and pm \(_{1}\) is found as well as a marking node of type \(N_{1.2}\)—the one created by node \(N_{1.2}\). One marking node is created whose duration aggregates the durations of its dependencies. Computed based on Eq. 8, the duration \(d_{1.2}^\alpha \) is equal to \([5,\infty )\).

Since both of its dependencies have been executed, the execution can now proceed with node \(\mathrm {U}\):

  • \(\mathrm {U}\) (\(\mathcal {Z}\)MR) The LHS of the rule is the same as \(n_1\). One match for s and pm \(_{1}\) is found as well as two marking nodes: one for the left operand, i.e., the negation node, and one for the right operand, i.e., \(\alpha _{1.2}\). For each interval in the duration \(d_{1.2}^\alpha \) of the marking node it is checked whether it is adjacent to or overlapping with any of the intervals in the duration of the negation node. The interval \({\mathbb {R}}\) from \(d^\nu \) and the interval \([5,\infty )\) from \(d_{1.2}^\alpha \) are indeed overlapping. Therefore, the duration \(d_\mathrm {U}\) of the marking node created by the \(\mathcal {Z}\)MR is computed according to Eq. 6, which returns \([-55,\infty )\).

  • \(N_{1}\) (\(\mathcal {V}\)MR) An s and a pm \(_{1}\) are matched. Their lifespan is \([4,\infty )\). One marking node with duration \(d_\mathrm {U}\) exists; effectively, this duration is the satisfaction span of the \(ac\) of the query. The temporal validity of the match is the intersection of the lifespan of the match with the duration, i.e., \([4,\infty ) \cap [-55,\infty )=[4,\infty )\).

The executed query returns a \(\mathcal {T}\) which contains a match for \(n_1\) associated with the interval \([4,\infty )\) during which, besides being structurally present in the graph, the match satisfies the temporal requirements expressed by \(\psi _1\).
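Under the closed-interval simplification of the earlier sketches, the key numbers of this run can be re-derived; the variable names are illustrative:

```python
# Re-tracing the execution over H_[5]: d^nu = R, d^alpha_{1.2} = [5, inf),
# Eq. 6 yields [-55, inf), and the final validity is [4, inf).
INF = float("inf")

d_alpha_12 = (5, INF)                    # duration of the right operand of U
I = (0, 60)                              # timing constraint of U
# (j⁺ ∩ i) ⊖ I with j = R: shift i by the reversed end-points of I
d_U = (d_alpha_12[0] - I[1], d_alpha_12[1] - I[0])
lam_m = (4, INF)                         # lifespan of the match for n_1
validity = (max(lam_m[0], d_U[0]), min(lam_m[1], d_U[1]))
print(d_U)        # (-55, inf)
print(validity)   # (4, inf)
```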

Execution over \(H_{[7]}\) The event \(e_7\) yields a new RTM\(^{\mathrm{H}}\) instance, i.e., \(H_{[7]}\)—see Fig. 5. The event corresponds to the deletion of d \(_{1}\) and the creation of pm \(_{2}\) as well as an edge between pm \(_{2}\) and s. InTempo is invoked and Execution executes the temporal GDN:

  • \(N_{1.1}\) (\(\mathcal {V}\)MR) No match for \(n_{1.1}\) is found.

  • \(\alpha _{1.1}\) (\(\alpha \)MR) A new match for the kernel exists (the one involving pm \(_{2}\)) whose duration is also the empty interval. The match found in the previous execution remains unchanged.

  • \(\lnot \) (\(\mathcal {Z}\)MR) Recall that the node’s LHS is the same as \(n_1\). A new match is found for s and pm \(_{2}\) whose duration is computed to be \({\mathbb {R}}\), similarly to the previous execution over \(H_{[5]}\). The match found in the previous execution remains unchanged.

  • \(N_{1.2}\) (\(\mathcal {V}\)MR) No new matches are found, however, the duration of the match already stored is updated to [5, 7).

  • \(\alpha _{1.2}\) (\(\alpha \)MR) A new match with s and pm \(_{2}\) is found for the kernel with an empty duration as no corresponding marking node exists. The duration of the match found in the previous execution, i.e., \(H_{[5]}\), is re-computed to [5, 7) to reflect the change in the duration of its dependent marking node in the previous phase.

  • \(\mathrm {U}\) (\(\mathcal {Z}\)MR) A new match for the node’s LHS is found. This match is associated, via the matches stored by the right operand of the until, with an empty interval. The duration computed for this match is, likewise, an empty interval. The duration of one of the marking nodes associated with the match found in the previous execution has been updated: this triggers an update of the duration stored by the node for the match in question. The result of the re-computation is \([-55,7)\).

  • \(N_{1}\) (\(\mathcal {V}\)MR) A new match is found. The satisfaction span for this match is an empty interval and, hence, its temporal validity is also empty. The temporal validity of the previously found match for \(n_1\) is re-computed to [4, 7).

InTempo returns a \(\mathcal {T}\) which has been incrementally computed and contains one match for \(n_1\)—the one involving pm \(_{1}\). The match involving pm \(_{2}\), although structurally present, does not satisfy the MTGC in \(\zeta _1\)—the match has an empty temporal validity and is hence excluded from \(\mathcal {T}\). The temporal validity of the match involving pm \(_{1}\) has been updated to reflect the change to d \(_{1}\) by \(e_7\).
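The interval computations in the walkthrough above (e.g., the temporal validity \([4,\infty ) \cap [-55,\infty )=[4,\infty )\)) can be illustrated with a minimal sketch. The tuple-based interval encoding below is an assumption made for illustration; the paper's durations may mix open and closed end-points, which this sketch does not model.

```python
import math

# Intervals are (lo, hi) pairs with a closed left end-point; hi may be
# math.inf. This encoding is an illustrative simplification.
def intersect(a, b):
    """Intersection of two intervals; None if the result is empty."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    return (lo, hi) if lo < hi else None

# Temporal validity: the lifespan of the match intersected with the
# satisfaction span computed by the until node (cf. the N_1 step above).
lifespan = (4, math.inf)        # match for s and pm_1 exists from time 4 on
satisfaction = (-55, math.inf)  # duration d_U of the until marking node
print(intersect(lifespan, satisfaction))  # -> (4, inf)
```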

6 Constraining memory consumption of an RTM\(^{\mathrm{H}}\)

As noted previously, an RTM\(^{\mathrm{H}}\) maintains two views on the state of the modeled system. Similar to traditional RTMs, one view is of the current system state, as the RTM\(^{\mathrm{H}}\) contains all system entities present in the system. The other view, which extends traditional RTMs, is that of the entire history of the state. The view of the entire history is enabled by (i) creation (cts) and deletion (dts) timestamps and (ii) the retention of vertices which have been deleted from the modeled system, the deletion being reflected in the dts of the vertices in question.

This wealth of insight comes at a price which in certain cases might be problematic. On the one hand, regulations originating in the context of the system, such as retention policies for privacy-sensitive patient data in healthcare, e.g., [85, 86], may require that certain data be discarded after a certain period. On the other hand, remembering each system change causes the RTM\(^{\mathrm{H}}\) to constantly grow in size. The growth rate may need to be curbed to reduce memory consumption or to avoid cluttering the model with obsolete data which may degrade the performance of pattern matching—as ever more elements have to be considered.

For these cases, the remedy lies in knowing when and what to forget. In this section we present the functionality of InTempo which allows for the optional constraining of the history representation. This constrained representation contains all elements that are relevant to query executions and, compared to the unconstrained representation, may afford increased memory efficiency.

6.1 RTM\(^{\mathrm{H}}\) maintenance based on time window

In case the dts of an element is equal to \(\infty \), the element is still present in the modeled system and thus, due to causal connection, may not be removed from the RTM\(^{\mathrm{H}}\). This is not true for elements whose \(dts \not =\infty \) as those have been removed from the modeled system. Nevertheless, because of the presence of temporal operators and their timing constraints, elements might be relevant to query executions for a certain period even after they have been deleted from the modeled system. When this period elapses, deleted elements may be considered for removal from the RTM\(^{\mathrm{H}}\).

For example, let \(\zeta _2:=(n_{1.1}, \psi _2)\) be a query with \(\psi _2:=\lozenge _{[2,5]} \exists \, n_{1.2}\)—as for \(\zeta _1\), it is assumed that the system tracks time in seconds, hence time units represent seconds. Assume that \(\zeta _2\) is executed over \(H_{[7]}\) in Fig. 5. The vertex d \(_{1}\) has been deleted, yet it is relevant to the execution of the query. Given the timing constraint of the temporal operator in \(\zeta _2\), the same would hold for an \(H_{[10]}\): although deleted, d \(_{1}\) might still be involved in the query execution. The involvement of d \(_{1}\) at a time point \(\tau \) is final only for RTM\(^{\mathrm{H}}\) instances whose associated timestamp exceeds the sum of \(\tau \) and the right end-point of the timing constraint of \(\lozenge _{[2,5]}\).

Given a (finite) set of MTGCs \(\varPsi \), we determine the time window \({\mathcal {W}}\), i.e., the period for which the involvement of deleted elements in query executions is not final, as follows. First, we compute w for an MTGC \(\psi \in \varPsi \):

$$\begin{aligned} w(\psi ) = {\left\{ \begin{array}{ll} r(I) + \max {(w(\chi ), w(\omega ))} &{}\text {if } \psi = \chi \mathrm {U}_{I} \,\omega \\ r(I) + \max {(w(\chi ), w(\omega ))} &{}\text {if } \psi = \chi \mathrm {S}_{I} \,\omega \\ \max {(w(\chi ), w(\omega ))} &{}\text {if } \psi = \chi \wedge \omega \\ w(\chi ) &{}\text {if } \psi = \lnot \chi \\ w(\chi ) &{}\text {if } \psi = \exists (n, \chi )\\ 0 &{}\text {if } \psi = {true } \\ \end{array}\right. } \end{aligned}$$
(9)

For \(\varPsi \), the time window is given by:

$$\begin{aligned} {\mathcal {W}}(\varPsi ) = \max _{\psi \in \varPsi }{\,w(\psi )} \end{aligned}$$
(10)
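The recursion in Eqs. 9 and 10 can be sketched as follows. The nested-tuple encoding of MTGCs (e.g., `("until", (0, 60), chi, omega)`) and the function names are assumptions made for illustration, not the paper's data structures; \(r(I)\) is the right end-point of the interval \(I\).

```python
# Hedged sketch of Eqs. 9-10 over a toy MTGC representation.
def w(psi):
    """Time window of a single MTGC (Eq. 9)."""
    if psi == "true":
        return 0
    op = psi[0]
    if op in ("until", "since"):          # chi U_I omega / chi S_I omega
        _, (left, right), chi, omega = psi
        return right + max(w(chi), w(omega))
    if op == "and":                       # chi and omega
        return max(w(psi[1]), w(psi[2]))
    if op == "not":                       # not chi
        return w(psi[1])
    if op == "exists":                    # exists(n, chi); the pattern n
        return w(psi[2])                  # does not contribute to w
    raise ValueError(f"unknown operator: {op}")

def W(Psi):
    """Time window of a set of MTGCs (Eq. 10)."""
    return max(w(psi) for psi in Psi)

# psi_1 = (not exists n_1.1) U_[0,60] (exists n_1.2)  =>  w = 60
psi_1 = ("until", (0, 60), ("not", ("exists", "n_1.1", "true")),
         ("exists", "n_1.2", "true"))
# psi_2 = eventually_[2,5] exists n_1.2, with eventually_I derived as
# true U_I ...  =>  w = 5
psi_2 = ("until", (2, 5), "true", ("exists", "n_1.2", "true"))
print(W({psi_1, psi_2}))  # -> 60
```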

If the constrained representation option is enabled, InTempo performs the computation of \({\mathcal {W}}\) during Operationalization. Maintenance is performed after Execution and resembles garbage collection, i.e., it prunes all elements \(\epsilon \) in an RTM\(^{\mathrm{H}}\) whose dts falls below a certain threshold. Maintenance yields a pruned RTM\(^{\mathrm{H}}\), denoted by P and defined practically as follows.

$$\begin{aligned} P_{[\tau _i]} = {\left\{ \begin{array}{ll} H_{[\tau _i]} &{} \text {if } i = 1\\ \{\epsilon \,| \, \epsilon \in H_{[\tau _i]} \wedge \epsilon .dts \ge \tau _{i-1} - 2{\mathcal {W}}\} &{} \text {otherwise} \end{array}\right. }\nonumber \\ \end{aligned}$$
(11)

where \(\tau _{i}\) is the time point of the event e with index i in \({\bar{h}}_\tau \) (see Sect. 2.2) and \(H_{[\tau _i]}\) is the RTM\(^{\mathrm{H}}\) corresponding to the coalescence of e at \(\tau _i\) with \(H_{[\tau _{i-1}]}\). The pruning threshold \(\tau _{i-1} - 2{\mathcal {W}}\) spans a period for which deleted vertices are relevant to the query execution at \(\tau _i\), i.e., \([\tau _{i-1}-{\mathcal {W}}, \tau _{i}]\). However, to ensure correctness of results, the threshold also covers a preceding period, i.e., from \(\tau _{i-1}-2{\mathcal {W}}\), which is motivated in the following section.
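The pruning of Eq. 11 can be sketched as below. The dict-based element representation and the function name are illustrative assumptions; an element with \(dts = \infty \) is still present in the modeled system and is therefore always kept.

```python
import math

# Hedged sketch of Eq. 11: keep elements whose dts is at or above the
# pruning threshold tau_prev - 2*W; deleted elements below it are final
# with respect to all upcoming query executions and may be removed.
def prune(elements, tau_prev, window):
    threshold = tau_prev - 2 * window
    return [e for e in elements if e["dts"] >= threshold]

history = [
    {"id": "d_1",  "cts": 5, "dts": 7},         # deleted at time point 7
    {"id": "pm_1", "cts": 4, "dts": math.inf},  # still present, never pruned
]
# With W = 5 and the previous event at tau_prev = 20, the threshold is 10,
# so d_1 (dts = 7 < 10) is pruned while pm_1 is kept.
print(prune(history, tau_prev=20, window=5))
```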

6.2 Projected answer set and intended setting

An \(H_{[\tau ]}\) constitutes a complete representation of a history of events \({\bar{h}}_\tau \), i.e., it contains all changes from the beginning of the history up to and including the time point \(\tau \). The removal of elements from \(P_{[\tau ]}\) renders the representation of \({\bar{h}}_\tau \) partial. In the following, it is shown that, over the course of a history, i.e., the sequence of events, all changes to the temporal validity \({\mathcal {V}}\) of a match can be detected in both representations. However, the \({\mathcal {V}}\) returned by constrained representations is restricted to a certain period, reflecting that the representation is partial. This restriction is captured by a projected answer set \(\mathcal {T}^{\pi }_{}\). Intuitively, a \(\mathcal {T}^{\pi }_{}\) projects the \({\mathcal {V}}\) of a match at \(\tau _i\), i.e., \({\mathcal {V}}_{\tau _{i}}\), on a period in which \({\mathcal {V}}\) might have changed (captured by \({\mathcal {W}}\)) since the previous event at \(\tau _{i-1}\), that is, the interval \(\varGamma = [\tau _{i-1}-{\mathcal {W}}, \tau _i]\).

Definition 4

(projected answer set \(\mathcal {T}^{\pi }_{}\) ) Given an answer set at time point \(\tau _i\) over an RTM\(^{\mathrm{H}}\) \(H_{[\tau _i]}\), denoted by \(\mathcal {T} _{\tau _i}(H_{[\tau _i]})\), the projected answer set \(\mathcal {T}^{\pi }_{\tau _i} (H_{[\tau _i]})\) is defined as follows.

$$\begin{aligned} \mathcal {T}^{\pi }_{\tau _i} (H_{[\tau _i]})= {\left\{ \begin{array}{ll} \mathcal {T} _{\tau _i}(H_{[\tau _i]}) &{} \text {if } i = 1\\ \{(m,{\mathcal {V}} \cap \varGamma ) \,| \, (m, {\mathcal {V}}) \in \mathcal {T} _{\tau _i}(H_{[\tau _i]}) \} &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

\(\mathcal {T}^{\pi }_{}\) is defined identically for a pruned RTM\(^{\mathrm{H}}\) P.
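The projection of Definition 4 can be sketched as follows. For simplicity, each temporal validity is a single closed interval here (in general, \({\mathcal {V}}\) may comprise several intervals); the encoding and names are assumptions made for illustration.

```python
import math

# Hedged sketch of Definition 4: intersect each match's temporal validity
# with Gamma = [tau_prev - W, tau_i].
def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def project(answer_set, tau_prev, tau_i, window):
    """Projected answer set T^pi at tau_i (matches with empty result keep None)."""
    gamma = (tau_prev - window, tau_i)
    return [(m, intersect(v, gamma)) for (m, v) in answer_set]

answers = [("pm_1", (4, math.inf))]
# With W = 60, tau_prev = 5, and tau_i = 7, Gamma = [-55, 7]:
print(project(answers, tau_prev=5, tau_i=7, window=60))  # -> [('pm_1', (4, 7))]
```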

For a match \((m, {\mathcal {V}})\), all modifications corresponding to an event that could have affected its \({\mathcal {V}}_{\tau _{i}}\) are within \([\tau _{i}-{\mathcal {W}}, \tau _{i}]\). \(\mathcal {T}^{\pi }_{\tau _{i+1}} \) contains this interval as well as the interval \([\tau _{i}, \tau _{i+1}]\). Therefore, with respect to \(\tau _{i+1}\), all time points in \({\mathcal {V}}\) before \(\tau _{i}-{\mathcal {W}}\) are final, i.e., no modification after \(\tau _{i+1}\) could affect them. This fact also motivates the pruning threshold: deleted elements are kept in P as long as their dts is larger than or equal to the earliest time point at which a modification could affect the earliest time point for which \({\mathcal {V}}\) is not final.

See Fig. 10 for an illustration. The \(\mathcal {T}^{\pi }_{\tau _{i}}\) (the dotted grid) returns a match \((m, {\mathcal {V}}_{\tau _{i}})\) where \({\mathcal {V}}_{\tau _{i}}\) contains all those time points which are not final with respect to \(\tau _{i}\). Analogously, so does \(\mathcal {T}^{\pi }_{\tau _{i+1}}\) (in gray) with \({\mathcal {V}}_{\tau _{i+1}}\). At \(\tau _{i+1}\) however, all time points before the pruning threshold \(\tau _{i} - 2{\mathcal {W}}\) (that were non-final in \(\mathcal {T}^{\pi }_{\tau _{i}}\)) have become final. An aggregation of \(\mathcal {T}^{\pi }_{\tau _{i}}\) and \(\mathcal {T}^{\pi }_{\tau _{i+1}}\) would still contain final time points in \({\mathcal {V}}_{\tau _{i}}\) even if the pruning of an element at \(\tau _{i+1}\) would cause these time points to be excluded from \({\mathcal {V}}_{\tau _{i+1}}\).

Fig. 10 Projected temporal validity of a match obtained for two consecutive time points \(\tau _{i}\) and \(\tau _{i+1}\)

Theorem 2

Let \({\bar{h}}_\tau \) be a history, \({\mathcal {D}}\) be the last index of \({\bar{h}}_\tau \), and \(\zeta :=(n, \psi )\) with \(\zeta \in \mathscr {L}_\mathrm {T} \). Moreover, let H and P be a complete and a pruned RTM\(^{\mathrm{H}}\), respectively. Over the history, \(\mathcal {T}^{\pi }_{} (H)\) is equal to \(\mathcal {T}^{\pi }_{} (P)\), that is:

$$\begin{aligned} \overset{{\mathcal {D}}}{\underset{i=1}{\bigcup }} \mathcal {T}^{\pi }_{\tau _i} (H_{[\tau _i]}) = \overset{{\mathcal {D}}}{\underset{i=1}{\bigcup }} \mathcal {T}^{\pi }_{\tau _i} (P_{[\tau _i]}) \end{aligned}$$
(12)

Proof

(sketch) By induction over \({\mathcal {D}}\). See Sect. B.2 for the proof. \(\square \)

Since the history captured in P is partial, queries over P can compute the temporal validity of matches only for a restricted period of time—captured by the projected answer set. This is in contrast to queries over an un-pruned RTM\(^{\mathrm{H}}\) H where the temporal validity of matches is computed for the entire history. The loss of information in P compared to H is a trade-off for the potential of increased memory-efficiency and faster query execution times over P.

Moreover, the deletion of elements may result in matches being removed from P; therefore, P is intended for use-cases where matches are only relevant for a short period of time after being returned, e.g., in self-adaptation, where a match constitutes an adaptation issue that is to be fixed as soon as possible.

6.3 Maintenance with dynamic sets of queries

InTempo seamlessly supports non-fixed, i.e., dynamic, sets of queries over a complete RTM\(^{\mathrm{H}}\). The following functionality enables their support by a pruned RTM\(^{\mathrm{H}}\).

Upon an invocation of InTempo, Operationalization checks whether the set of queries has been altered and, if so, re-computes \({\mathcal {W}}\). If \({\mathcal {W}}\) has not changed or has decreased, the invocation proceeds as usual by executing the queries. Otherwise, if \({\mathcal {W}}\) has increased, the derivation of a complete answer set for the incoming queries cannot be ensured for a period of time equal to the difference between the newly computed value of \(2{\mathcal {W}}\) and the previous one. In this case, the queries for which a complete \(\mathcal {T}\) cannot be guaranteed are not admitted for execution until the length of the history represented by the RTM\(^{\mathrm{H}}\) suffices for their \(\mathcal {T}\).

For example, assume InTempo is first executed for the previously introduced query \(\zeta _2\) with \(\varPsi =\{\psi _2\}\), where \({\mathcal {W}}=5\). Later on, the query \(\zeta _1\) is added to the input queries, which leads to \(\varPsi =\{\psi _1, \psi _2\}\). The time point of the addition is marked by the \(\tau \) of the RTM\(^{\mathrm{H}}\) \(H_{[\tau ]}\). The alteration induces a re-computation of \({\mathcal {W}}\) to \({\mathcal {W}}'=60\). The value has increased; therefore, based on \(\tau \) as well as the difference \(2{\mathcal {W}}'-2{\mathcal {W}}\), \(\zeta _1\) is admitted for execution only when an RTM\(^{\mathrm{H}}\) \(H_{[\tau ']}\) is induced with \(\tau ' - \tau \ge 2{\mathcal {W}}'-2{\mathcal {W}}\), i.e., in this case, at least 110 time units after its addition.
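The admission check above can be sketched as a small predicate. The function and parameter names are our assumptions; the numbers reproduce the example in the text (\({\mathcal {W}}=5\), \({\mathcal {W}}'=60\), hence a delay of \(2 \cdot 60 - 2 \cdot 5 = 110\) time units).

```python
# Hedged sketch of the admission check in Sect. 6.3: a query added at
# tau_add, after the window grows from w_old to w_new, is admitted only
# once tau - tau_add >= 2*w_new - 2*w_old.
def admitted(tau, tau_add, w_old, w_new):
    """True once enough history has accumulated for a complete answer set."""
    if w_new <= w_old:  # window unchanged or decreased: no delay needed
        return True
    return tau - tau_add >= 2 * w_new - 2 * w_old

tau_add = 100  # hypothetical time point at which zeta_1 is added
print(admitted(209, tau_add, w_old=5, w_new=60))  # -> False (109 < 110)
print(admitted(210, tau_add, w_old=5, w_new=60))  # -> True  (110 >= 110)
```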

6.4 Considerations

In most cases, a pruned RTM\(^{\mathrm{H}}\) would be more memory-efficient than a complete one. In fact, against a constant rate of incoming events, pruning would yield an RTM\(^{\mathrm{H}}\) whose memory consumption would be bounded. However, if the event rate does not have a fixed upper bound, the memory consumption, although mitigated by pruning, would still increase over time.

In the case of MTGCs with an unbounded past operator, i.e., with \(r(I) = \infty \), an RTM\(^{\mathrm{H}}\) cannot be pruned, as the temporal requirement evidently refers to the entire history. In the case of an unbounded future operator, the MTGC may be non-monitorable (see [87]), i.e., the satisfaction of the MTGC may depend on the entire infinite future of the execution and thus a verdict can never be provided. Monitoring such formulas is achieved via a practical solution, e.g., the replacement at runtime of the right end-point of the operator with the largest time point of the sequence, which affords a verdict on the satisfaction at any point of monitoring.

Pruning may reduce the size of the matching search space and thus may improve the pattern-matching time of the Execution operation. On the other hand, Maintenance performs two more tasks that should be taken into consideration with respect to the overall performance. First, it uses a priority queue of deleted elements in the RTM\(^{\mathrm{H}}\) and iteratively polls the queue to detect and prune elements whose dts exceeded the threshold. Second, every time an element is pruned, Maintenance re-computes the matches maintained by the temporal GDN to detect whether pruned elements have affected any matches and, if so, ensure that affected matches are re-computed based on the latest RTM\(^{\mathrm{H}}\).

7 Runtime monitoring with temporal graph queries

InTempo processes a sequence of events which represents an ongoing system behavior and, during query execution, checks whether the observed sequence (captured in the RTM\(^{\mathrm{H}}\)) satisfies a temporal logic formula (captured in the MTGC of the query). This functionality resembles the monitoring approach known as Runtime Verification (RV); see [7]. In this section, we discuss the application of InTempo for RV.

7.1 Temporal graph queries for temporal properties

RV represents the system behavior by sequences of states or events at some level of abstraction. An online algorithm is then used to check whether (some prefix of) the sequence satisfies a given property [9], i.e., a formal statement. RV focuses on the verification of temporal safety properties [1], i.e., statements of the form “something bad should never happen” which in temporal logic is expressed by prefixing each formula with the always operator [62], i.e., the abbreviation of \(\lnot \lozenge _{[0,\infty )}\lnot \psi \) in MTGL with \(\psi \) an MTGC. RV searches for violations of such properties which, on a finite sequence, can always be detected as soon as they occur [72].

In the context of graphs, a temporal safety property which contains no temporal operators corresponds to a graph query in \(\mathscr {L}\)—see Sect. 2.4. The verification of such properties corresponds to updating a host graph based on a sequence of events and, upon each update, searching the graph for matches of the query pattern [24]. If a match is found, then the property is violated.

Properties with temporal operators require the tracing of a pattern in the host graph over time, i.e., over multiple updates. By the incorporation of a temporal logic such as MTGL into \(\mathscr {L}_\mathrm {T}\) and the consolidation of updates into an RTM\(^{\mathrm{H}}\), temporal graph queries in \(\mathscr {L}_\mathrm {T}\) are capable of specifying temporal properties where any matches in an RTM\(^{\mathrm{H}}\) returned by InTempo constitute violations.

As an example, recall the query \(\zeta _1 :=(n_1, \psi _1)\) with \(\psi _1:=\lnot \exists \,n_{1.1} \,\mathrm {U}_{[0,60]}\, \exists \, n_{1.2}\). In order for matches of \(\zeta _1\) to return the periods for which there is a violation, that is, there are admitted patients who are not prepared for treatment within the designated time or who, until they are prepared, are mistakenly re-triaged, the MTGC of the query needs to be negated, i.e., \(\zeta '_1:=(n_{1},\lnot \psi _1)\). Executed over \(H_{[7]}\) (Fig. 5), \(\zeta '_1\) returns one match, the one involving pm \(_{2}\), associated with \([7,\infty )\)—the duration of until is empty, its negation is \({\mathbb {R}}\), and hence the temporal validity \({\mathcal {V}}\) is computed by \([7, \infty ) \cap {\mathbb {R}}\). It can be inferred that the procedure for the patient with pID=2, i.e., pm \(_{2}\), does not conform to the guideline—since its \({\mathcal {V}}\) is non-empty. Moreover, the procedure is violated from time point 7 onward.

The example above demonstrates an advantage of InTempo over traditional RV solutions, as RV solutions typically return a value from the Boolean domain, e.g., true or false, or, in cases with enhanced expressiveness, some value related to the Boolean domain, e.g., true, probably true, false [12]. Conversely, owing to the computations in Definition 1, InTempo returns the duration for which the match is valid, i.e., the violation occurs, which is arguably a more intuitive monitoring outcome and can be further utilized by the system.

A temporal graph query can be obtained for all MTGCs. For an MTGC \(\psi \) where the topmost condition is not an existential quantification, a query can be obtained by wrapping \(\psi \) in a query looking for an empty pattern, e.g., \((\emptyset , \lnot \psi )\). An answer set for \((\emptyset , \lnot \psi )\) would consist of an empty match and a temporal validity which would mark the duration for which \(\psi \) is violated.

7.2 Postponing a decision

In the previous example, where \(\zeta '_1\) is executed over \(H_{[7]}\), InTempo returns a match for pm \(_{2}\), i.e., a violation, although, based on the interval of \(\lozenge _{[0,60]}\), the object may potentially satisfy \(\psi _1\) in the future—for example, an addition of an instance of DrugService with the appropriate pID could occur in the next few seconds.

For \(H_{[7]}\), the scheme makes a decision based on what has already been observed. Typically, in RV, a decision is not made unless it is conclusive, i.e., there is no possible future that could satisfy the property. A given RTM\(^{\mathrm{H}}\) is sufficient for making a conclusive decision for a temporal property which concerns the past. This is not the case for temporal operators which concern the future, where, in order to be conclusive, the decision would have to be postponed. This practice may be particularly useful in systems where humans interoperate with a software system (as in the SHS case-study) and the system acts as a safety net, i.e., it intervenes only if other planned actions have not been taken within a certain period.

In this section, we extend Execution to return only those matches whose temporal validity has become final since the last time the operation was performed. This functionality is enabled by equipping Execution with a predicate defined as follows.

Definition 5

(\({\mathcal {P}}\)) Let \((n,\lnot \psi )\) be a query with n a pattern and \(\psi \) an MTGC. Moreover, let \({\mathcal {W}}\) be the time window of \(\psi \) and \({\mathcal {V}}(m,\lnot \psi )\) be a non-empty temporal validity of a match m for n in \(H_{[\tau _i]}\) with \(\tau _i \in {\mathbb {T}}\). Then, the predicate \({\mathcal {P}}^m_{\tau _i}\) is defined as:

$$\begin{aligned} {\mathcal {P}}^m_{\tau _i} = {\left\{ \begin{array}{ll} {true } &{} \text {if } {\mathcal {V}}(m, \lnot \psi ) \cap [\tau _{i-1}-{\mathcal {W}}, \tau _{i}-{\mathcal {W}}] \not =\varnothing \\ \textit{false} &{} \text {otherwise}\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(13)

Intuitively, a true \({\mathcal {P}}^m_{\tau _i}\) means that there is at least one time point for which the temporal validity has become final with respect to \(\tau _{i}\). The Execution operation checks all matches with a non-empty \({\mathcal {V}}\) and returns only those for which \({\mathcal {P}}^m_{\tau _i}\) is true. A decision on time points for which \({\mathcal {V}}\) is not final is postponed until the next event.

Equipped with \({\mathcal {P}}^m_{\tau _i}\), an invocation of InTempo for \(H_{[7]}\) finds the match involving pm \(_{2}\) and the associated interval \([7,\infty )\). However, the predicate for the match is computed to be false, i.e., intuitively, \(\psi _1\) could be satisfied in the future of \(H_{[7]}\), and thus InTempo does not return the match. Had there been no drug service added for that patient in future instances of the RTM\(^{\mathrm{H}}\), the match would have started being returned in all RTM\(^{\mathrm{H}}\) instances \(H_{[\tau ']}\) with \(\tau '\ge 67\).
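The predicate of Definition 5 can be sketched as follows, using the example above: for the match involving pm \(_{2}\) with \({\mathcal {V}} = [7,\infty )\) and \({\mathcal {W}} = 60\). Single-interval validities and the function name are assumptions made for illustration.

```python
import math

# Hedged sketch of Eq. 13: the predicate is true iff the validity V
# overlaps [tau_prev - W, tau_i - W], i.e., some time point of V has
# become final with respect to tau_i.
def predicate(validity, tau_prev, tau_i, window):
    lo = max(validity[0], tau_prev - window)
    hi = min(validity[1], tau_i - window)
    return lo <= hi  # non-empty intersection

# Match involving pm_2: V = [7, inf), W = 60.
print(predicate((7, math.inf), tau_prev=5, tau_i=7, window=60))   # -> False
# At tau_i = 67 the time point 7 has become final and the match is returned.
print(predicate((7, math.inf), tau_prev=7, tau_i=67, window=60))  # -> True
```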

Fig. 11 Overview of InTempo and system interaction for adaptation

The predicate may impose a delay on the detection of some violations; such a delay, however, is inherent in reactive monitoring and, in practice, is often handled by an appropriately timed periodic event generated by the monitor.

In the context of the SHS and similar systems, this functionality enables waiting for a planned action to be taken by a clinician and only detecting a violation after said period has elapsed. The actual period can be set such that the detection serves as a warning, i.e., it anticipates an omission and allows for some time where the action can still be taken, either by clinicians or smart devices of the SHS.

8 Application scenario: runtime adaptation

This section applies InTempo in a self-adaptation scenario which utilizes temporal graph queries and the history-awareness of an RTM\(^{\mathrm{H}}\).

8.1 Self-adaptation based on RTMs

Self-adaptive systems are able to modify their own behavior or structure in response to their perception of their context, the system itself, and their requirements [31]. Self-adaptation can be generally achieved by adding, removing, and re-configuring components as well as connectors among components in the system architecture [77]; therefore, the architecture view is typically considered an appropriate abstraction level, e.g., [46, 47]. Causal connection renders RTMs a natural choice for representing the system architecture, as adaptations can be realized as changes on the RTM which are subsequently mirrored in the system (see [107]).

An established method of instrumenting self-adaptation is to equip the system with an external feedback control loop, i.e., an adaptation engine. An established reference model for the design of an adaptation engine is the MAPE-K feedback loop [66]. The MAPE-K loop monitors and analyzes the system and, if needed, plans and executes an adaptation of the system, where the adaptation is defined in terms of architecture changes. All four MAPE activities (monitor, analyze, plan, and execute) are based on knowledge. The feedback loop maintains an RTM as part of its knowledge to represent the current state of the architecture. Thus, the activities of the MAPE-K feedback loop operate on the RTM to perform self-adaptation.

Fig. 12 History-aware adaptation engine

A self-adaptive system can be adapted via adaptation rules. Adaptation rules represent fine-grained units of change that can be performed on the underlying system. The execution of an adaptation rule adapts the system from its current state to a new state. Adaptation rules are of the form: if condition then action. A condition checks whether an adaptation issue is present, whereas an action describes a desired adaptation. If the condition is met, the action is taken. The feedback loop captures changes (during Monitor); checks whether changes cause an adaptation issue (during Analyze); and, if the condition is satisfied, plans and executes an adaptation action (during Plan and Execute, respectively) [73].
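The if-condition-then-action shape of adaptation rules can be sketched as below. The rule and model types are illustrative assumptions, not the adaptation engine's API; in the paper, conditions correspond to graph queries and actions to graph transformations.

```python
# Hedged sketch of if-condition-then-action adaptation rules in a MAPE loop.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AdaptationRule:
    condition: Callable[[dict], bool]  # does the model exhibit the issue?
    action: Callable[[dict], None]     # adapt the model (mirrored to system)

def analyze_and_adapt(model, rules):
    """Analyze: check each condition; Plan/Execute: take matching actions."""
    for rule in rules:
        if rule.condition(model):
            rule.action(model)

# Toy rule: if a sepsis probe lacks antibiotics, administer them.
model = {"sepsis_probe": True, "antibiotics": False}
rule = AdaptationRule(
    condition=lambda m: m["sepsis_probe"] and not m["antibiotics"],
    action=lambda m: m.update(antibiotics=True),
)
analyze_and_adapt(model, [rule])
print(model["antibiotics"])  # -> True
```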

The graph-based encoding of RTMs allows for a realization of adaptation rules in the form of graph transformation rules, where adaptation issues are expressed via graph patterns which, in turn, characterize graph queries, i.e., the LHS of the rule. Graph queries are executed during analysis and the RTM is adapted via in-place graph transformations based on the RHS of the rule [47].

8.2 History-aware self-adaptation via InTempo

An RTM\(^{\mathrm{H}}\) captures the current state of an RTM as well as temporal information on when changes to said RTM occurred. Furthermore, temporal graph queries are capable of formulating adaptation rules whose conditions include temporal requirements on the history of the system structure. By replacing the RTM in a MAPE-K loop with an RTM\(^{\mathrm{H}}\), InTempo can be used to execute temporal graph queries which, in this context, correspond to checking adaptation conditions, over a sequence of events which represent changes on the architecture. The query answers can be used to plan and execute adaptations. Therefore, InTempo may serve as the basis for an adaptation engine that enables history-aware self-adaptation—see Fig. 11.

Figure 12 depicts a detailed view of an adaptation based on the MAPE loop. The engine operates in two phases: the setup and the (self-adaptation) loop. Operationalization of InTempo (the trapezoid shape containing “O” in Fig. 12) is performed during setup. Execution (“E”) is performed in the Analyze activity. The engine features a Maintain activity—an extension compared to a canonical MAPE loop—during which Maintenance of InTempo (“M”) is performed.

8.3 Self-adaptation scenario for SHS

In the following, we build on the SHS introduced in Sect. 2.1 to envisage a (self-)adaptation scenario that enacts a medical instruction. The instruction imposes temporal requirements on the operation of the SHS which are checked and enforced by the five activities of the adaptation loop described below. The scenario is based on the medical guideline on the treatment of sepsis [78, 89], a potentially life-threatening condition. We focus on the basic instruction that reads: “between ER Sepsis Triage and IV Antibiotics should be less than 1 hour” [78]. Note that, from the point-of-view of the system, this timing constraint in the instruction is soft, as medical guidelines often provide contingency plans in case a deadline is inadvertently missed, e.g., [110, p. 11]. Therefore, a system adaptation could occur after the deadline is missed and still remedy the situation.

The event log in [78] contains real medical records from a hospital where patients diagnosed with sepsis were treated according to the guideline. The log contains a multitude of events. We focus on those that correspond to actions prescribed in the guideline, i.e., ER Sepsis Triage, IV Antibiotics, and Release, which correspond to a patient being triaged as an emergency sepsis case, an intravenous (IV) administration of antibiotics for sepsis, and a patient being released from the emergency ward, respectively. Events in the log are timestamped. Based on the SHS metamodel (Fig. 2) and the available hospital log, we envisage the procedure described in the guideline enacted by the SHS.

In detail, an ER Sepsis Triage event is simulated as a Probe with status sepsis, generated from an instance of PMonitoringService pm which has been invoked by an SHSService s. An IV Antibiotics event is simulated as a Probe with status anti from a DrugService d which has also been invoked by s. The patterns capturing the occurrence of these events in our SHS are depicted in Fig. 13. To make sure these two actions are referring to the same patient, \(g_{1.1}\) contains an attribute constraint (in braces) that checks whether the pID of d and pm are equal.

Fig. 13 Patterns for self-adaptation in SHS—the usage of the same label for vertices denotes that vertices refer to the same element in the RTM\(^{\mathrm{H}}\)

Based on \(g_1\) and \(g_{1.1}\) in Fig. 13, the instruction is formulated in \(\mathscr {L}_\mathrm {T}\) by the query \(\text {MG1}:=(g_1, \lnot \psi _{\text {MG1}})\) with \(\psi _{\text {MG1}}\) the MTGC \(\lozenge _{[0,3600]} \exists \, g_{1.1}\). That is, the query searches for matches of \(g_1\), which identifies a (previously untreated) patient with sepsis, for which, in the next hour, there is no match for pattern \(g_{1.1}\), which identifies the administration of antibiotics to the same patient. The binding of elements in \(g_{1.1}\) from \(g_1\) is illustrated in Fig. 13 by using the same labels for vertices, e.g., for pm. The system is assumed to track time in seconds.

In order to challenge our scheme with a more complicated scenario, we also search for violations of a variation of MG1. Namely, no patient with sepsis should be released prior to being treated—a requirement that once more resembles conformance checking and refers to Release events in the original log. The requirement is captured by the query \(\text {MG2}:=(g_1, \lnot \psi _{MG2})\) with \(\psi _{MG2}:=\lnot \exists g_{1.2} \,\mathrm {U}_{[0,3600]} \exists \, g_{1.1}\). The pattern \(g_{1.2}\) is similar to \(g_1\) and depicted in Fig. 13. In the following, we describe a desired adaptation loop for the SHS according to these instructions.

Monitor During the monitoring activity, the recent events (new readings captured by Probes since the last invocation of the loop) together with their cts and dts values are reflected in the RTM\(^{\mathrm{H}}\), which is an instantiation of the Architecture. Therefore, the RTM\(^{\mathrm{H}}\) is updated to represent the current system structure enriched with the relevant temporal data.

Analyze The activity detects adaptation issues. In this context, these are captured by violations of \(\psi _{MG1}\), i.e., the existence in the RTM\(^{\mathrm{H}}\) of structural patterns that reflect sepsis cases (\(g_1\)) without associated antibiotics (\(g_{1.1}\)) within one hour or, similarly, for \(\psi _{MG2}\). Hence we execute (separately) the queries MG1 and MG2. Note the similarity with the monitoring of temporal properties in Sect. 7.1. Technically, the detection is based on the execution of the temporal GDN which has been obtained by Operationalization during the setup of the engine. In the context of an SHS, self-adaptation only supports the medical procedure, i.e., it first waits for clinicians to perform the actions in the guideline. Only when there is no more time is self-adaptation enabled. Thus, InTempo is equipped with the \({\mathcal {P}}\) predicate in Definition 5 to return only matches whose temporal validity is final.

The matches returned by InTempo in this activity constitute adaptation issues, and similar to [47], adaptation-related types (Annotation in Fig. 2) are used to facilitate the adaptation. During Analyze, the PMonitoringService instance which has been involved in the detection of an adaptation issue is annotated with an Issue instance. Therefore, to ensure that only new violations are matched, \(g_1\) is extended to check that no instance of Issue is associated with the matched instance of PMonitoringService. Issue instances, as well as instances of other adaptation-related nodes in Fig. 2, are created by transformation rules which respect the RTM\(^{\mathrm{H}}\) and are, therefore, capable of setting their cts and dts appropriately.

Plan and Execute In planning, the engine searches for sepsis Probes annotated with an instance of Issue. Upon finding them, it attaches an Effector to the service to which the Probe instance is attached. The Execute activity of the loop searches for effectors and upon finding them takes an adaptation action, i.e., administer antibiotics to the patient via a DrugService instance. This adaptation action is also reflected in the RTM\(^{\mathrm{H}}\) by creating an AdaptationAction instance which is associated to the handled Issue instance.

Maintain This activity is optional. If enabled, Maintenance uses the time window obtained by Operationalization during setup and prunes the RTM\(^{\mathrm{H}}\), i.e., it removes all elements that have been deleted and whose involvement in query executions is final. Following the removal of elements, the GDN is re-executed to update matches.
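A minimal sketch of such pruning (an assumption about the bookkeeping, not the actual Maintenance operation): an element can be discarded once it has been deleted and its deletion lies further in the past than the time window, so no pending query interval can still refer to it.

```python
def prune(elements, now, window):
    """Keep elements that are still alive (dts is None) or whose
    deletion is more recent than the time window, i.e., whose
    involvement in query executions may not be final yet."""
    return [e for e in elements
            if e["dts"] is None or e["dts"] + window > now]

elements = [
    {"id": "a", "dts": None},   # still alive
    {"id": "b", "dts": 100},    # deleted long ago: prunable
    {"id": "c", "dts": 900},    # deleted recently: keep
]
```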

9 Evaluation

This section presents an implementation of InTempo which is evaluated based on a simulation of the SHS case-study and a benchmark for graph-based technologies which simulates the operation of a social network. Implementation details are presented in Sect. 9.1. In Sect. 9.2, the implementation is integrated in an adaptation engine, as shown in Fig. 12, and evaluated on the SHS adaptation scenario presented in Sect. 8.2, where adaptation issues are detected by the execution of temporal graph queries over real and synthetic data. Query execution times are compared to those of an RV tool and a time-aware model indexer in Sect. 9.3. In Sect. 9.4, we evaluate the querying performance of the implementation with data of different sizes, generated by the LDBC Social Network Benchmark. We discuss the results in Sect. 9.5 and present threats to their validity in Sect. 9.6.

9.1 Implementation

Our implementation of InTempo is based on the Eclipse Modeling Framework (EMF) [36, 101], which is a widespread MDE technology for creating software systems. For pattern matching, we use the Story Pattern Matcher [48] with the search plan generation strategy presented in [3]. The Matcher uses local search to start the search from a specific element of the graph and thus reduces the pattern matching effort [64]. It uses an OCL checker for checking attribute constraints. For computations on intervals we use an open-source library [52]. For the removal of elements from the runtime model, we transparently replace the native EMF method, via a Java agent, with an optimized version which reduces the potentially expensive shifting of cells in the underlying array list and renders the removal more efficient. The implementation is available as an EMF plugin in [81]. The plugin relies on two domain-specific languages for the generation of an event mapping (see design-time artifacts in Sect. 3) and the specification of temporal graph queries in \(\mathscr {L}_\mathrm {T}\). For the evaluation, we developed two variants based on the plugin: IT, with pruning disabled, and IT\(_{\tiny {\text {+P}}}\), with pruning enabled.

9.2 Runtime adaptation of smart hospital system

We developed a simulator of the adaptable SHS presented in Sect. 8. The simulation replays events on an RTM\(^{\mathrm{H}}\) based on the real and synthesized event logs described in Sect. 9.2.1. After each event, the temporal graph queries MG1 and MG2, described in Sect. 8.2, are executed. Matches constitute adaptation issues which are resolved by appropriate modifications to the RTM\(^{\mathrm{H}}\).

9.2.1 Input logs

The log used in our experiments (in the following, real log) contains 1049 trajectories of sepsis patients admitted to a hospital within 1.5 years [78]. Each trajectory comprises a sequence of events. The events that are relevant to the case-study are ER Sepsis Triage (ER), IV Antibiotics (IV), and Release (RE) events. A trajectory starts with an ER event, and IV and RE events might follow. The inter-arrival time (IAT) between two ER events defines the arrival rate of trajectories. We used statistical distribution fitting to find the best-fitting distributions for the inter-arrival times between: two ER events (IAT\(_T\)), an ER and an IV event (IAT\(_{SA}\)), and an ER and an RE event (IAT\(_{SR}\)). Then, we used statistical bootstrapping [33] to generate two synthetic logs, x10 and x100, with IAT\(_T\) values that are 10 and 100 times smaller, respectively, than the IAT\(_T\) values of the real log. IAT\(_{SA}\) and IAT\(_{SR}\) remain as in the real log.

As a result, x10 and x100 cover the same period of time as the real log, and increase the trajectory density (approx.) 10 and 100 times, respectively, allowing us to test the scalability of InTempo without altering the statistical characteristics of the real log. The logs and a detailed description of the statistical methods employed are available in [93].
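The generation of the denser logs can be sketched as follows (a simplification of the procedure documented in [93]; function and variable names are ours): trajectory arrival timestamps are produced by resampling the real IAT\(_T\) values with replacement and dividing them by the densification factor.

```python
import random

def synthesize_arrivals(real_iats, factor, n, seed=0):
    """Bootstrap n trajectory arrival timestamps by resampling real
    inter-arrival times (with replacement) and scaling them down by
    `factor`, as for the x10 and x100 logs."""
    rng = random.Random(seed)
    ts, arrivals = 0.0, []
    for _ in range(n):
        ts += rng.choice(real_iats) / factor
        arrivals.append(ts)
    return arrivals

# Densify tenfold, as for the x10 log (toy IAT sample in seconds).
arrivals = synthesize_arrivals([600.0, 1200.0], factor=10, n=4)
```

Because only IAT\(_T\) is scaled, the per-trajectory timing (IAT\(_{SA}\), IAT\(_{SR}\)) is left untouched, preserving the statistical characteristics of the real log.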

Table 1 Overview of input logs: the number of events is mapped to the number of vertices and edges created in the model; the column Deleted shows the percentage of deletions for vertices and edges contained in the logs

Each event in the logs corresponds to the creation of certain elements in the RTM\(^{\mathrm{H}}\). However, to evaluate the performance of pruning, we required that the lifespans of these elements have an end, i.e., that their dts is set. This information is not provided in the original log. Therefore, for each created element, we defined an interval after which a delete event was injected into the logs. The intervals for Probe and Service instances are 10 seconds and one hour, respectively. The logs that contain the deletions are available in [96]. An overview of the logs is shown in Table 1. As shown by the Deleted column, all created elements are eventually deleted.
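The injection can be sketched as follows (our illustration; the event-tuple layout is an assumption), with the fixed lifespans of 10 seconds for Probe and one hour for Service instances:

```python
LIFESPANS = {"Probe": 10, "Service": 3600}  # fixed lifespans in seconds

def inject_deletes(log):
    """Append, for each creation event, a delete event after the fixed
    lifespan of its element type; return the log sorted by timestamp."""
    out = list(log)
    for etype, eid, ts in log:
        out.append(("delete-" + etype, eid, ts + LIFESPANS[etype]))
    return sorted(out, key=lambda e: e[2])

log = [("Probe", "p1", 100), ("Service", "s1", 100)]
```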

9.2.2 Experiment design

We integrated each variant in an adaptation engine. We denote this integration by a circular arrow: IT\(^{\circlearrowright }\) includes the Monitor, Analyze, Plan, and Execute activities—in terms of InTempo, the operations Operationalization and Execution—while IT\(^{\circlearrowright }_{\tiny {\text {+P}}}\) includes all activities above plus Maintenance, i.e., Operationalization, Execution, and Maintenance of InTempo. See Fig. 12 for an overview. The experiments simulate the events in the real, x10, and x100 logs. Each experiment entails the execution of one variant for one query, either MG1 or MG2.

We measure IT \(^{\circlearrowright }\) and IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) with respect to their reaction time (or full loop time). In this context, the reaction time is equal to the required time for one execution of the adaptation loop, i.e., the time from when an issue is detected to when a corresponding adaptation action has been performed. Thus the reaction time consists of times for Analyze, Plan, Execute and, for IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\), Maintain. The time required for Analyze is effectively the query execution time. Time measurements are used to assess the requirement for fast query executions (R3). In each adaptation loop, we measure the memory consumed by the variants based on the values reported by the JVM. Memory consumption is used to assess the requirement for memory-efficiency (R4). The time spent in Monitor, i.e., processing an event, is negligible and thus not reported.

A loop is invoked periodically based on a predefined but modifiable frequency. In our experiments, based on the IAT\(_T\) of the logs, we set the invocation frequency to one hour, i.e., 3600 seconds, to avoid frequent invocations where no events are processed. The invocation frequency coincides with the maximum delay of a violation detection, i.e., in the worst case, a violation will occur at the first second after the loop and will be detected at the next invocation which in this case is after almost one hour. Operationalization, which produces the temporal GDN as well as the time window utilized by Maintenance of IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\), is executed only once during the setup of the loop.
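The worst-case detection delay follows directly from the periodic invocation; a small sketch (our own, with time in seconds and invocations at multiples of the period):

```python
PERIOD = 3600  # invocation frequency of the adaptation loop

def detection_delay(violation_ts):
    """Seconds until the next loop invocation detects a violation
    occurring at violation_ts, assuming invocations at multiples
    of PERIOD."""
    next_invocation = ((violation_ts // PERIOD) + 1) * PERIOD
    return next_invocation - violation_ts
```

A violation occurring one second after an invocation thus waits almost a full period, matching the worst case described above.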

Each experiment is measured for either time or memory and proceeds as follows. First, during the Monitor activity, events from the logs are processed. Each log event corresponds to certain modifications to the RTM\(^{\mathrm{H}}\): an ER Sepsis Triage event corresponds to the addition of a PMonitoringService and a Probe instance with status set to sepsis to the model; an IV Antibiotics event to the addition of a DrugService instance and a Probe instance with status anti; a Release event is similar to the ER Sepsis Triage, except that the status is set to release. The loop is invoked at the predefined intervals, triggering the Analyze activity which executes the query. Matches constitute adaptation issues. During Plan and Execute, transformations are performed which correspond to adaptation actions. Finally, for IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\), Maintain is performed and matches are recomputed.

9.2.3 Results

Figures 14 and 15 depict the cumulative time (in logarithmic scale) for each of the measured loop activities and the reaction, i.e., full loop, time for MG1 and MG2.

As expected, the results are mainly influenced by the Analyze activity, which is when issues are detected, i.e., queries are executed. The number of processed events in the experiments with the log files real, x10, and x100 grows steadily—see Sect. 9.2.1. For IT \(^{\circlearrowright }\), this increase is mirrored in the size of the RTM\(^{\mathrm{H}}\). The Analyze time of IT \(^{\circlearrowright }\) increases with respect to these two parameters. The growth of the RTM\(^{\mathrm{H}}\) can also be seen in Table 3, where the maximum memory measurement is reported for both variants. Contrary to IT \(^{\circlearrowright }\), the pruning in IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) minimizes the size and therefore the memory consumption of the RTM\(^{\mathrm{H}}\). In fact, because the rate of created elements per period in each log does not increase and, over time, it is almost equal to the rate of deleted elements, the memory consumption over real, x10, and x100 remains unchanged.

Fig. 14 Cumulative time of loop activities for MG1

Owing to pruning, the Analyze time of IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) increases at a considerably smaller pace compared to IT \(^{\circlearrowright }\). Note that pruning forces a re-computation of the results. Therefore, as shown in Figs. 14 and 15, the time it requires is non-negligible. Figure 16 shows the time for Analyze for each loop of the two variants for the x100 log (in logarithmic scale). The pruning of the RTM\(^{\mathrm{H}}\) enables the Analyze time of IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) to remain constant.

9.3 Comparison to state-of-the-art

During the Analyze activity, Execution, i.e., the operation of InTempo, finds matches by checking whether a temporal property, i.e., an MTGC, is violated by the RTM\(^{\mathrm{H}}\). This functionality resembles the objective of RV, where an online algorithm verifies whether a sequence of events representing a system execution violates a temporal property. The algorithm is required to maintain an (internal) representation of the history of the execution, similar to the RTM\(^{\mathrm{H}}\). We therefore use the state-of-the-art RV tool MonPoly to acquire a baseline for the performance of IT \(^{\circlearrowright }\) and IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) in detecting issues during the activity.

RV tools are typically not intended for usage with structural models. In MDE, storing a structural model, i.e., a graph, may be achieved by a graph database. However, only a few graph databases support a notion of time in the representation, the query specification, and the query execution, e.g., the database presented in [59]. None of the time-aware databases supports incremental execution of queries with complex patterns, which renders them sub-optimal for an online setting where the result set is updated after each change to the model.

Hawk is a model indexer, i.e., a solution which monitors file-based repositories such as Git or SVN, stores models or model elements of interest to a database, and maintains an index, i.e., an efficient representation, of the model evolution which is amenable to model-element-level querying [4]. Hawk has been recently extended to integrate the time-aware database from [59] and to support temporal primitives which can be used to query the history of a model directly. Using these primitives, we formulate the queries MG1 and MG2 and compare the performance of Hawk to that of InTempo.

9.3.1 Runtime verification with MonPoly

MonPoly [9, 10] is a command-line tool which notably combines an adequately expressive specification language with an efficient incremental monitoring algorithm. It has served as a reference in evaluations of other RV tools [34, 60] and has been among the top performers in an RV competition [8]. Its specification language is based on Metric First-Order Temporal Logic [9] (MFOTL), which uses first-order relations to capture system entities and their relationships. MonPoly processes a sequence of timestamped events, maintains an internal representation of the system execution, and checks whether it violates a given formula. Unlike an RTM, representations in RV tools are created ad hoc and are pruned by default, i.e., they contain only the data that is relevant to the formula and has not been checked yet.

Fig. 15 Cumulative time of loop activities for MG2

Fig. 16 Time for Analyze activity per variant (MG1 - x100)

The semantics of MFOTL are point-based, i.e., the logic assesses the truth of a formula only at the time points of the events in a sequence and not over the entire time domain, as do interval-based logics such as MTGL. Therefore, a result in MFOTL is not accompanied by a temporal validity as in InTempo. Furthermore, for certain types of formulas, point-based semantics may yield counter-intuitive results which disagree with interval-based semantics—on the other hand, it may allow for more efficient monitoring algorithms compared to those based on interval-based semantics (see [11]). Although the difference in interpretations may affect more extensive evaluations, it does not affect the conditions of the queries discussed in Sect. 8.3.

Encoding a graph pattern in MFOTL requires an explicit definition of the expected (temporal) ordering of the events that corresponds to the order of creation of the elements in the simulation. To emulate pattern matching, we would therefore have to specify an MFOTL formula that would consider all possible events of interest as a start for matching the pattern and then search in the past of the execution or in the present for the rest of the events of interest. Leveraging the knowledge of the actual order in which events occur in the simulation, we simplify the formulas for MonPoly by specifying only the correct ordering. This creates an advantage for MonPoly in the comparison with our implementation which we deem is justified as MonPoly is not intended for pattern matching. For ensuring that a pattern matches only entities with overlapping lifespans, we use a construction suggested by the MonPoly authors [9]: for a creation event c(a) and a deletion event d(a), we encode the lifespan of the entity a by \(\lnot \exists \, d(a) \, \text {S}_{[0,\infty )} \, \exists \, c(a)\).
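A point-based reading of this lifespan encoding can be sketched as follows (illustrative only; we use one event per time point and omit the metric bound \([0,\infty )\), which is trivially satisfied):

```python
def alive(trace, entity, i):
    """Evaluate the lifespan encoding (no d(a) since c(a)) at position
    i: the entity is alive if its most recent event up to and
    including i is a creation."""
    created = False
    for kind, name in trace[: i + 1]:
        if name == entity:
            created = kind == "c"   # "c" (re)creates, "d" deletes
    return created

trace = [("c", "a"), ("c", "b"), ("d", "a"), ("c", "a")]
```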

Since the output of MonPoly focuses on a violation, forward-looking matching, i.e., matching a relation in the past and subsequently searching for other relations in its future, would not produce the desired result, as it would always output only the time point at which the first violating relation was matched.

We map MG1, i.e., \((g_{1}, \lnot \psi _{MG1})\) with \(\psi _{MG1}\) the MTGC \(\lozenge _{[0,3600]} \exists \, g_{1.1}\) (see Sect. 8.3), in a straightforward manner to its MFOTL equivalent, i.e., \(g_{1}\) is enclosed by an existential quantifier, the other operators remain intact, and relations are used instead of patterns. This is not possible for \((g_{1},\lnot \psi _{MG2})\) with \(\psi _{MG2} :=\lnot \exists g_{1.2} \,\mathrm {U}_{[0,3600]} \exists \, g_{1.1}\), however, as MonPoly restricts the use of negation in this case. It does so for reasons of monitorability: the tool assumes an infinite domain of values, and the negation of \(g_{1.2}\) at a given time point when it does not exist is satisfied by infinitely many values and is therefore non-monitorable by MonPoly—note that the use of negation is unrestricted for Since, as in the lifespan construction above. In the following, we compare to MonPoly only for MG1. The translations in MFOTL are shown in Appendix C—see Listing 1 and Listing 2.

9.3.2 Indexing and querying an RTM\(^{\mathrm{H}}\) with Hawk

Hawk [35, 44] integrates a time-aware graph database [59] which tracks changes between (timestamped) repository commits and, therefore, equips Hawk with the capability of querying the history of a model. Hawk represents history by versions of types and type instances. A new version of a type is created every time a type instance is created or deleted; the initial version of a type has no instances and types are never removed from the indexer. A new version of an instance is created every time one of its features changes; instances are removed from the indexer when they are deleted from the model. Versions are timestamped based on the timestamp of the repository commit that created the version in question.
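The versioning scheme can be illustrated by a toy model (our own sketch for illustration, not Hawk's API; the class and method names are assumptions):

```python
class ToyIndex:
    """Types gain a version whenever an instance is created or deleted;
    instances gain a version whenever one of their features changes.
    The initial version of a type has no instances."""
    def __init__(self):
        self.type_versions = {}      # type -> [(timestamp, instance count)]
        self.instance_versions = {}  # id -> [(timestamp, features)]

    def create(self, typ, iid, ts, features):
        versions = self.type_versions.setdefault(typ, [(0, 0)])
        versions.append((ts, versions[-1][1] + 1))  # new type version
        self.instance_versions[iid] = [(ts, dict(features))]

    def set_feature(self, iid, ts, key, value):
        features = dict(self.instance_versions[iid][-1][1])
        features[key] = value
        self.instance_versions[iid].append((ts, features))  # new instance version
```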

Hawk formulates queries in the Epsilon Object Language (EOL), which draws from the well-known Object Constraint Language (OCL) [90]. Hawk extends EOL with support for temporal primitives such as time, getVersionsFrom(\(\tau \)), and eventually, which enable retrieving the timestamp of a version, obtaining a specific collection of versions based on their timestamp \(\tau \), or making assertions over a collection of versions. EOL supports methods native to EMF which obtain the container of an instance or the contents of one of its features.

The query MG1 is translated into EOL by obtaining a set \(c_1\) of all Probe instances with status set to sepsis, created within a certain time window. Subsequently, we obtain a collection \(c_2\) with the containers of all instances in \(c_1\) (the instances of PMonitoringService). For each instance in \(c_2\), we obtain its container (the instance of SHSService). We collect all contents of the connected feature of the SHSService, i.e., all the connected services, that are of type DrugService in a collection \(c_3\) and, for each instance in \(c_3\), we check whether the contents of its probes feature include a Probe instance with status set to anti, whose timestamp satisfies the temporal constraints. Query MG2 is identical except that it also checks whether an instance of Probe with status release exists in the period between an instance with status anti and an instance with status sepsis. The queries are available in Listing 3 and Listing 4 in Appendix C.

9.3.3 Input logs

Regarding MonPoly, we encoded the SHS metamodel by relations, following generally standard practices (see [88]). We translated all simulated logs, i.e., real, x10, and x100, into sequences of events based on this encoding. An overview of the translated logs is shown in Table 1—translations are prefixed with “M-.” The logs are available in [96].

Hawk supports models created in EMF which allows us to re-use the SHS metamodel in Fig. 2 as well as a part of our implementation to map events in the log files real, x10, and x100 to model modifications. The modifications are identical to those created for InTempo.

9.3.4 Experiment design

MonPoly processes the events in the file and, for each event, updates its internal representation of the system behavior and its result. As explained earlier, the representation only retains data which are (temporally) relevant to the formula. This processing resembles the experiments for InTempo and, therefore, MonPoly is executed only once per experiment. Each experiment entails the execution of the tool with one translated log and the property MG1—as MG2 cannot be monitored by MonPoly. The latest MonPoly version at the time of writing (1.1.10) is used and run on the same machine as the implementation variants. For measuring the memory consumption and execution time, we use the output generated by MonPoly.

For Hawk, the experiments proceed similarly to those conducted for InTempo—see Sect. 9.2.2. Each experiment entails checking either MG1 or MG2 and is measured for either time or memory. Following the processing of an event, i.e., model modifications, the model is saved as an XMI file, i.e., in the standard EMF file format, and committed to a programmatically created Git repository. The timestamp of the commit is set to the timestamp of the event. Hawk is invoked periodically at the same time points as the loop invocations of the InTempo variants. In every invocation, Hawk is first requested to update its index and then to execute the query. Given the periodic invocation, queries obtain only the necessary versions of Probe such that no adaptation issues are missed. That is, the translations use the primitive getVersionsFrom(\(\tau -2{\mathcal {W}}\)), which retrieves all versions from \(\tau -2{\mathcal {W}}\) onward, with \(\tau \) the timestamp of the latest change and \({\mathcal {W}}\) the time window—see Sect. 6.1. The latest Hawk version at the time of writing (2.2.0) is used and run on the same machine as InTempo. Query execution time and memory consumption are measured similarly to InTempo. The time required for saving and committing the XMI file is omitted.

Table 2 Issue detection time (cumulative) for MG1 and MG2 (secs - rounded)

9.3.5 Results

The results for the issue detection time (in seconds) for the three logs are shown in Table 2. The issue detection time refers to the amount of time each tool or variant requires to produce the correct result: for IT \(^{\circlearrowright }\), this is only the sum of the times for Analyze, i.e., the query execution time, for every invocation; for IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\), however, it is the sum of Analyze and Maintain for every invocation. Similarly, the sum of the indexing and the querying time for every invocation is reported for Hawk. The execution of Hawk for x100 was stopped after almost three days, hence no results are reported.

Issue detection with MonPoly is faster than IT \(^{\circlearrowright }\) for real and x10. However, MonPoly is slower than IT \(^{\circlearrowright }\) for x100. IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) outperforms MonPoly for all logs. The reason is that, as shown in Table 3, pruning significantly reduces the size of the model and, therefore, its memory consumption. As a result, the time spent on pattern matching decreases. Hawk is the slowest in detecting issues as well as the most costly in terms of memory. The size of Hawk's database on disk was deemed irrelevant and is not reported.

Table 3 Memory consumption (max) for MG1 and MG2 (MBs)

Figure 17 (in logarithmic scale) shows the speedup of IT \(^{\circlearrowright }\) over Hawk, i.e., (hk/it), with hk the issue detection time of Hawk for an invocation and it the issue detection time of IT \(^{\circlearrowright }\) for the same invocation. The speedup value of 1 is marked by a dashed line. IT \(^{\circlearrowright }\) was faster than Hawk in all invocations except the plot points below the dashed line—which amount to approx. 1.2% of the total number of invocations. Figure 18 shows the speedup of IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) over Hawk. Hawk was always slower than IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\). In fact, their difference increases as the simulation proceeds, as pruning in IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) decreases the size of the RTM\(^{\mathrm{H}}\). Plot points where the speedup of IT \(^{\circlearrowright }\) is larger than the speedup of IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) are invocations in which the query execution of IT \(^{\circlearrowright }\) took a very small amount of time, i.e., less than a millisecond; although the query execution time in these invocations was similar for IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\), the time spent in pruning added an overhead which reduced the speedup. MonPoly only outputs the cumulative time required for monitoring the log; the time per event is not reported. Hence, a speedup comparison was not possible.

9.4 LDBC social network benchmark

The Social Network Benchmark (SNB) from the Linked Data Benchmark Council [75] is designed to simulate a plausible social network in operation. The benchmark can generate data of varying sizes and provides a series of realistic usage scenarios which aim at stress-testing and discovering bottlenecks in graph-based technologies. The latest version of the benchmark at the time of writing (0.4.0) generates data which contain both insert and delete operations [108] and can be conveniently transformed into a stream of timestamped creation and deletion events.

Fig. 17 Speedup of issue detection time of IT \(^{\circlearrowright }\) over Hawk (MG1 - x10)—values below the dashed line indicate invocations for which IT \(^{\circlearrowright }\) was slower than Hawk

9.4.1 Metamodel and queries

The SNB metamodel consists of a static part, i.e., the entities City, Country, Tag, TagClass, University, and Company whose instance creations precede the creation of the network and are never deleted, and a dynamic part, i.e., the entities Person, Post, Comment, and Forum, whose instances are created during the operation of the network and can be deleted. A relevant excerpt of the SNB metamodel is shown in Fig. 19—where entities in gray are explained later in this section. In the generated data, forum memberships and friendships are represented by links between entities. The vast majority of the network activity comprises persons joining forums, befriending other persons, or posting comments and replies in forums.

From the available queries in the SNB specification, we select two queries with a temporal dimension, namely IC4 and IC5 [39]. The (slightly adjusted) query IC4 reads: “Given a start Person, find Tags that are attached to Posts that were created by that Person’s friends. Only include Tags that were attached to friends’ Posts created within a year after they became friends with the start Person, and that were never attached to friends’ Posts created before that.” Similarly to the SHS case-study, statements in the query are captured as patterns—shown in Fig. 20.

The query refers to the point in time a friendship was created. Executing the query would entail checking the creation and deletion timestamps of the link that represents the friendship. InTempo does not directly support attributes in links. Following a customary modeling technique, e.g., [70], links of interest can be encoded as vertices. The links that represent a friendship and a forum membership are relevant to IC4 and IC5 and have been modeled as vertices with a creation and a deletion timestamp in Fig. 19—see the KnowsLink and HasMemberLink vertices.
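The encoding can be sketched as follows (data layout assumed for illustration): the attributed edge becomes a vertex carrying cts and dts, connected to its two endpoints by plain, unattributed edges.

```python
def encode_knows(graph, p1, p2, cts, dts=None):
    """Replace an attributed 'knows' edge between two Person vertices
    by a KnowsLink vertex that carries the timestamps."""
    link = {"type": "KnowsLink", "cts": cts, "dts": dts}
    graph["vertices"].append(link)
    graph["edges"].append((p1, link))  # plain edges to the endpoints
    graph["edges"].append((link, p2))
    return link

graph = {"vertices": [], "edges": []}
alice, bob = {"type": "Person", "id": 1}, {"type": "Person", "id": 2}
link = encode_knows(graph, alice, bob, cts=1000)
```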

Based on these patterns, we gradually compose the query in \(\mathscr {L}_\mathrm {T}\) for IC4. Note that the naming scheme of the patterns is based on their nesting level and that time is assumed to be tracked in seconds. We search for Tags and Persons (\(q_{1.1}\)) that satisfy the following three conditions simultaneously. First, they are friends with the start Person (\(q_{1.1.1}\), where the start Person is a user input captured by the pattern constraint). For this condition, it is additionally required to locate the first time point where the friendship was created. In MTGL, this may be achieved by the construction \(\exists (q_{1.1.1}, \lnot \blacklozenge _{(0,\infty )} \exists \, q_{1.1.1.1})\), where we make use of the knowledge that the HasMemberLink will be the last vertex created in \(q_{1.1.1}\), i.e., after the two Persons. The second condition requires friends to have posted Posts with Tags where the Posts’ first time point (moment of creation) was within the last year: \(\blacklozenge _{[0,1y]} \exists (q_{1.1.2}, \lnot \blacklozenge _{(0,\infty )} \exists \, q_{1.1.2.1})\)—note that we abbreviate the number of seconds in a year by “1y.” Finally, these Tags have never been attached to Posts of the start Person by any other friend: \(\lnot \blacklozenge _{[0,\infty )} \exists (q_{1.1.2.2},\exists \, q_{1.1.2.2.1})\).

The construction locating the first time point a pattern occurs, that is, \(\exists (q_{1.1.1},\lnot \blacklozenge _{(0,\infty )} \exists \, q_{1.1.1.1})\), uses an unbounded temporal operator which requires that InTempo stores the entire history and effectively disables pruning. As mentioned earlier, the lifespan of a match is always an interval, i.e., a connected set of time points. This characteristic makes the first time point when a match occurs unique, i.e., there can be no two first time points in the past. Therefore, in this particular construction, it is unnecessary to check the sub-condition over the entire history. Instead, the interval of the operator can be reduced to a minimal interval, i.e., \(\blacklozenge _{(0,1)}\), which returns the same result while allowing InTempo to avoid storing the entire history. In the following, we abbreviate this construction by the operator exists-first: \(\exists (q_{1.1.1}, \exists ^f q_{1.1.1.1})\).
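The equivalence rests on lifespans being connected intervals; a minimal sketch with integer time points (our own illustration):

```python
def is_first(match_points, t):
    """t is the first time point of a match's lifespan iff the match
    holds at t but not immediately before t. Because the lifespan is a
    single connected interval, checking the minimal past interval
    (0,1) is equivalent to checking the unbounded past (0,inf)."""
    return t in match_points and (t - 1) not in match_points

lifespan = set(range(10, 20))  # a connected set of time points
```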

Fig. 18 Speedup of issue detection time of IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) over Hawk (MG1 - x10)

Fig. 19 Relevant excerpt of the metamodel of the LDBC Social Network Benchmark

In summary, in \(\mathscr {L}_\mathrm {T}\), IC4 is captured by the query \(\text {IC4}:=(q_1, \psi _{IC4})\) with \(\psi _{IC4}\):

$$\begin{aligned} \begin{aligned}&\; \; \exists (q_{1.1}, \exists (q_{1.1.1}, \exists ^f q_{1.1.1.1})\\&\wedge \blacklozenge _{[0,1y]} \exists (q_{1.1.2}, \exists ^f q_{1.1.2.1} \\&\qquad \quad \wedge \lnot \blacklozenge _{[0,\infty )} \exists (q_{1.1.2.2}, \exists \, q_{1.1.2.2.1}))) \end{aligned} \end{aligned}$$
(14)

The patterns for IC5 are shown in Fig. 21. The (slightly adjusted) query reads: “Given a start Person, find the Forums which that Person’s friends and friends of friends (excluding start Person) became members of in the two months before the friendship was created. Return all Posts in the Forums created by the start Person’s friends or friends of friends within that period.” In \(\mathscr {L}_\mathrm {T}\) it is captured by the query \(\text {IC5}:=(p_1, \psi _{IC5})\) with \(\psi _{IC5}\):

$$\begin{aligned} \begin{aligned}&\;\;\;\;(\exists p_{1.1} \vee \exists (p_{1.2}, \exists p_{1.2.1})) \\&\wedge (\blacklozenge _{[0,2mo]} \exists (p_{1.3}, \exists ^f p_{1.3.1})) \end{aligned} \end{aligned}$$
(15)

Note that the number of seconds in two months has been abbreviated by “2mo.”

9.4.2 Input logs

We have used the SNB's data generator to generate data for two scale factors: sf-0.1 and sf-1, which create a network of 1.5K and 11K Persons, respectively. In total, 328K nodes and 1.5M edges are created in sf-0.1, and 3.2M nodes and 17.3M edges in sf-1. The generated data span a period of 10 years, from 2010 to 2020. We have captured the creation and deletion timestamps of insert and delete operations into log files of timestamped events. For our experiments, forum memberships and friendships are also encoded as nodes and, therefore, the relevant inserts and deletes in the generated data are similarly represented by events which create or delete HasMemberLink and KnowsLink instances, respectively. This brings the total of nodes and edges created by the log to 900K and 3.4M, respectively, for sf-0.1. The log for sf-1 creates 10.2M nodes and 38.4M edges—see the overview in Table 1. The logs are available in [96].

Fig. 20
figure 20

Graph patterns used for IC4, where $input denotes an input parameter provided to InTempo

The generated data comprise two stages: the operational stage, which entails the creation of the entire network and a small percentage of deletions and spans from 2010 to 2013, and the shutdown stage, which contains only deletions, destroys the network, and spans from 2013 to 2020. We have added a start-up stage to the beginning of the log which creates the static part of the network, e.g., Tags and Countries, at the beginning of the epoch (timestamps 0 to 7).

9.4.3 Experiment design

We envision a scenario where the results from queries IC4 and IC5 are utilized to provide a member of the network with dynamic recommendations or to warn the member about suspicious behavior in their network when the member logs in. Recommendations can be built from the Tags returned by IC4, while warnings can flag abnormally many new memberships by new friends, as returned by IC5. Therefore, in our experiments we execute the queries periodically on each (simulation) day, which simulates a daily login on the network by the member.
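The periodic execution can be sketched as an event-driven loop that triggers the queries once per simulation day. PeriodicExecutor and Query are illustrative names for this sketch, not part of InTempo’s API:

```java
/** Sketch of periodic (daily) query execution over a stream of timestamped events. */
public class PeriodicExecutor {
    public static final long DAY = 24L * 60 * 60; // seconds per simulation day

    public interface Query { void execute(long now); }

    private final Query query;
    private long nextExecution;

    public PeriodicExecutor(Query query, long start) {
        this.query = query;
        this.nextExecution = start + DAY;
    }

    /** Processes one event; runs the query for every full day that has elapsed. */
    public void onEvent(long timestamp) {
        while (timestamp >= nextExecution) {
            query.execute(nextExecution);
            nextExecution += DAY;
        }
    }
}
```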

According to the typical SNB execution scenario [75, p. 25], we first process a large number of operations (the first 35 months) of the operational stage such that a large starting RTM\(^{\mathrm{H}}\) has formed before the queries are executed. We call this the initial phase of the experiment. The initial phase corresponds to roughly 800K events in sf-0.1 and 8.8M events in sf-1. The queries are executed once per day for the remaining month in the operational stage. After the operational stage, the shutdown stage comprises numerous bulk deletions which span the remaining period, i.e., 7 years, and destroy the network. This would not constitute a realistic setting for the scenario, as only deletions would be processed. Hence, the experiments only run until the beginning of the shutdown stage. The percentage of the elements that are deleted in the operational stage is shown in Table 1.

Fig. 21
figure 21

Graph patterns used for IC5

Table 4 Query execution time (cumulative) for IC4 and IC5 (secs - rounded)

We evaluate the performance of IT (no pruning) and IT \(_{\tiny {\text {+P}}}\) (with pruning). IT \(_{\tiny {\text {+P}}}\) is only executed for IC5, as IC4 refers to the entire history and, therefore, contains an unbounded operator. Depending on the log, each query is executed for a different start person. This was done to ensure that the start person would be actively involved in query executions. To choose the start person, we created a list that sorted persons in the network according to their number of friends (largest to smallest) and randomly picked a person from the top half. The log sf-0.1 is executed for the person with id=483 and sf-1 for the person with id=361.
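The selection procedure can be sketched as follows; StartPersonSelector is an illustrative helper assuming a friend count is available per person:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

/** Illustrative selection of a start Person: sort by friend count (descending)
 *  and pick uniformly at random from the top half. */
public class StartPersonSelector {
    public record Person(long id, int friendCount) {}

    public static Person pick(List<Person> persons, Random rnd) {
        List<Person> sorted = new ArrayList<>(persons);
        // Sort from the person with the most friends to the one with the fewest.
        sorted.sort(Comparator.comparingInt(Person::friendCount).reversed());
        int topHalf = Math.max(1, sorted.size() / 2);
        return sorted.get(rnd.nextInt(topHalf));
    }
}
```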

In the initial phase, before the periodic execution begins, the temporal GDN is populated with all matches in the starting graph. The first execution of the periodic phase updates these matches. Each variant is executed for each input log and is measured for either query execution time or memory consumption. Experiments which measure time were executed 10 times and the average values are reported. The experiments are conducted on the same workstation as all other experiments.

Our attempt to evaluate MonPoly and Hawk with the SNB logs resulted in considerable practical difficulties. In the SHS case-study, we optimized MonPoly’s pattern matching by arranging the order of relations in the MFOTL property, leveraging our knowledge of the order in which these events occurred in the simulation. Applying the same optimization to SNB was impossible, as there was no fixed order in which some patterns could occur, e.g., the friendship and the membership in \(q_{1.1.2.2}\). Therefore, the MonPoly properties would have to feature many alternative orderings, which would deteriorate the performance of the tool. For Hawk, the initial phase of the experiment required generating and indexing approx. 800K and 8.8M (large) models for sf-0.1 and sf-1, respectively. Saving these models as XMI files and indexing them was quite slow: only a few tens of thousands of models had been processed after several hours. These difficulties indicated that the tools were not meant for this setting and that using them nonetheless would compromise the evaluation conclusions. Therefore, the tools were excluded from the SNB experiments.

Table 5 Memory consumption (max) for IC4 and IC5 (MBs)

9.4.4 Results

Table 4 shows the query execution times. The initial phase for IC4 and IC5 over sf-0.1 lasted approx. 18 and 39 seconds, respectively. Over sf-1, the initial phase for IC4 and IC5 lasted 145 and 363 seconds, respectively. The effect of pruning in IT \(_{\tiny {\text {+P}}}\) is marginal, as only a small number of deletions, i.e., less than 5%, occur in the log—see Table 1. On the other hand, the overhead of pruning is also reduced. Given that no pruning occurs for the first 35 months, a relatively lengthy pruning takes place after the first execution in every experiment. For instance, the first pruning for sf-1 lasts approx. 47 secs. The average duration of all other executions of pruning is 287ms.

Figure 22 shows the query execution times for IT and IT \(_{\tiny {\text {+P}}}\) in detail, as well as the number of relevant nodes (in thousands) added per period in sf-1: relevant nodes are those included in the patterns of the temporal GDN nodes for IC5, i.e., instances of HasMemberLink, Person, KnowsLink, and Post. Generally, larger execution times correspond to rounds with a larger number of relevant nodes. On average, the InTempo variants handled 15K additions of relevant nodes per period over sf-1. The average query execution time was 5.7 secs with IT and 5.8 secs with IT \(_{\tiny {\text {+P}}}\) for IC5.

Table 5 shows the memory consumption of the two variants. Due to the small number of deletions, the memory consumption of IT \(_{\tiny {\text {+P}}}\) is only slightly decreased. We measured that only loading the RTM\(^{\mathrm{H}}\) in memory consumed 2.6GBs for sf-0.1 and 44.4GBs for sf-1. The rest of the memory in Table 5 was consumed by intermediate results maintained by the nodes of the temporal GDN.

9.5 Discussion

We discuss the evaluation results with respect to the requirements for fast query execution times (R3) and memory-efficiency (R4) from Sect. 1. In light of the evaluation, we conclude the discussion with remarks on advantages and limitations of InTempo.

Fig. 22
figure 22

Query execution time (IC5 - sf-1)

9.5.1 Fulfillment of requirements

We presented a method to constrain the history represented in the RTM\(^{\mathrm{H}}\) (Sect. 6) where we assume that matches are handled as they are returned and can therefore be removed from the RTM\(^{\mathrm{H}}\). The assumption is justified by relevant use-cases in RV and self-adaptation. We assess the requirement R3 for fast query executions by (i) a comparison between the issue detection times of IT \(^{\circlearrowright }\), which used an un-constrained representation, and IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\), a variant that used a constrained one, and (ii) a comparison to the issue detection times of two state-of-the-art tools: the RV tool MonPoly, with which we emulated pattern matching with relations, and the model indexer Hawk. By issue detection time, we refer to the time required to return correct results, i.e., query execution time for IT \(^{\circlearrowright }\), query execution time plus pruning time for IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\), indexing time plus query execution time for Hawk, and execution time for MonPoly.

Both IT \(^{\circlearrowright }\) and IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) outperform Hawk and IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) also outperforms MonPoly. IT \(^{\circlearrowright }\) is slower than IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) and MonPoly; however, it operates on an un-constrained representation and is therefore capable of computing the precise validity of an answer over the entire history at any point in time. Hawk, which shares this ability, is slower than IT \(^{\circlearrowright }\). We can therefore conclude that query answers from InTempo are provided fast with respect to the state-of-the-art. Issue detection times were also relatively fast with respect to the invocation frequency in the SHS case-study. Although stronger claims on this aspect require further investigation, these times indicate that InTempo may serve as a basis for more sophisticated adaptation approaches.

We assess the requirement for memory-efficiency (R4) by a similar comparison. IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) consumes smaller amounts of memory than IT \(^{\circlearrowright }\) while delivering the same results over the entire history. Therefore, IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) is more memory-efficient compared to a variant using an un-constrained representation. Moreover, IT \(^{\circlearrowright }_{\tiny {\text {+P}}}\) is more memory-efficient than Hawk and, for larger logs, i.e., x10, x100, also than MonPoly.

Besides serving as a baseline for the variant with pruning, IT \(^{\circlearrowright }\) covers other important use-cases where pruning might be infeasible or undesirable. For instance, the queries of interest might change often, in which case time windows cannot be derived a priori—historical data might be relevant to another query in the future. Another example is when the incurred cost of pruning on the loop execution time is undesirable. Finally, a third possible scenario is postmortem analysis, such as self-explainability in self-adaptive systems [14], where the system is required to explain adaptation decisions over its entire history—for our example, IT \(^{\circlearrowright }\) is faster and more memory-efficient than Hawk, which is intended for such use-cases [45]. It should be noted that, given the database back-end and the offline use-cases, minimizing memory consumption may not be a primary focus for Hawk. Moreover, by storing versions of types and type instances, Hawk implicitly stores the history of types, links, and attribute values, which would require a manual encoding in InTempo.

The SNB log files allowed for a more extensive evaluation of InTempo as they involved more complex, realistic, and considerably larger graph structures. \(\mathscr {L}_\mathrm {T}\) is used for more complex queries than those in the SHS case-study, stemming from a domain which is significantly different from healthcare. The execution of InTempo for two scale factors of the benchmark indicates that, for the specific experimental setting, the query execution time of InTempo scaled adequately compared to the smaller and structurally simpler datasets in the SHS case-study. This evaluation allows for indirect comparisons with other tools in the future—as well as with future versions of InTempo.

A GDN stores all intermediate query results. Only nodes that are affected by model modifications are executed, which, combined with local search, makes for an incremental evaluation framework optimized for fast query executions. This feature is emphasized in the SNB experiments, where the structure of the RTM\(^{\mathrm{H}}\), in contrast to the SHS case-study, is amenable to local search. Pattern matching may skip large parts of the RTM\(^{\mathrm{H}}\) that are unaffected and, therefore, in relation to the number of events and added nodes (see Table 1), the query execution times were considerably low.
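The incremental scheme—store each node’s intermediate result and re-execute only nodes reached by a modification—can be sketched as follows. GdnNode and its integer-valued operations are a deliberate simplification for illustration, not the actual temporal GDN:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

/** Minimal sketch of GDN-style incremental evaluation: each node stores its
 *  intermediate result and is re-executed only if a modification reached it. */
public class GdnNode {
    private final List<GdnNode> inputs = new ArrayList<>();
    private final List<GdnNode> parents = new ArrayList<>();
    private final Function<List<Integer>, Integer> op;
    private Integer cached;       // stored intermediate result
    private boolean dirty = true; // true until (re-)evaluated

    public GdnNode(Function<List<Integer>, Integer> op, GdnNode... in) {
        this.op = op;
        for (GdnNode n : in) { inputs.add(n); n.parents.add(this); }
    }

    /** A model modification affecting this node marks it and its ancestors. */
    public void modify() {
        if (dirty) return;
        dirty = true;
        for (GdnNode p : parents) p.modify();
    }

    public int evaluate() {
        if (!dirty) return cached; // unaffected nodes reuse stored results
        List<Integer> vals = new ArrayList<>();
        for (GdnNode n : inputs) vals.add(n.evaluate());
        cached = op.apply(vals);
        dirty = false;
        return cached;
    }
}
```

A second call to evaluate() without an intervening modify() returns the cached result without re-executing any node.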

On the other hand, given the size of the patterns in SNB queries as well as the number of matches, storing all intermediate results has an adverse effect on memory consumption. The effect was aggravated by temporal GDNs featuring many \(\mathcal {Z}\)MR nodes, i.e., negation, conjunction, since, and until, whose LHS currently contains the maximal context required by parent nodes. Effectively, these nodes re-execute queries that have been already executed and re-store their results. For instance, IC5 features eight such nodes, with some of them redundantly storing matches of elements which are in great quantities in the data, e.g., Posts.

9.5.2 Advantages and limitations

Regarding the issue detection by an RV tool like MonPoly, we note that, although there are similarities to the use-cases of RV, there are also fundamental differences. InTempo relies on an RTM, i.e., a causally connected snapshot of the system constituents and their state, which we assume to be the product of a broader MDE context. Queries over the RTM (and their answers) are intended to be directly utilized within that context—as the self-adaptation application scenario in Sect. 8 demonstrated. Models are typically not the primary focus of RV tools; contrary to an RTM, representations of the system state are created ad hoc and are typically inaccessible by other tools or end-users, which may render RV tools impractical for use in an MDE context, e.g., it may hinder synergy with other model-based technologies.

Moreover, as mentioned previously, structural RTMs are amenable to a graph-based encoding. Transferring this setting to an adequately expressive RV representation like relations resulted in overly technical and error-prone encodings as, typically, RV tools do not focus on structural representations. Consequently, emulating pattern matching of queries which were rather structurally simple was also cumbersome, even after optimizations. Translating one of the properties, which concerned the prohibition of the existence of a pattern, into MFOTL was not possible due to the syntax restrictions of MonPoly. These findings indicate that MonPoly and similar RV tools are sub-optimal for graph-based representations and queries and, in turn, for structural adaptation of systems.

In the presented experimental setting, InTempo outperformed Hawk and MonPoly. Nonetheless, the level of maturity of the two tools surpasses that of the prototypical implementation we present. Hawk supports various modeling technologies, including EMF. It also seamlessly supports history of attributes, links, and types. Although its database back-end is expected to yield slower access times than an in-memory history representation, it offers other advantages, e.g., increased scalability with respect to the size of the models, which may benefit other settings but were not explored in our evaluation. Both Hawk and, to a lesser extent, MonPoly support monitoring the evolution of a value, e.g., aggregating the value of an attribute for each query execution and monitoring whether the sum exceeds a limit, which InTempo currently does not. We plan to address this limitation by equipping the temporal GDN with auxiliary nodes used to encode input parameters as constraints that are checked during pattern matching, as shown in our previous work [100].

As demonstrated by the SNB experiments, InTempo requires that attributes, types, and links whose evolution is of interest are encoded as vertices. For the SNB, this meant that two links were encoded as vertices, which roughly tripled the number of vertices in the RTM\(^{\mathrm{H}}\) compared to the model normally created by sf-0.1 and sf-1. Although this is a customary modeling technique for EMF-based implementations, other tools, such as Hawk, handle the encoding in a manner transparent to the user.

Originally, the creation timestamp in SNB is captured by a vertex or edge attribute to which queries may refer directly. Therefore, the original queries IC4 and IC5 contain timing constraints based on the physical time, e.g., “before October 2010.” We adjusted these constraints to logical time, e.g., “in the last two months,” as physical time references are not currently supported by \(\mathscr {L}_\mathrm {T}\)—and are typically not provided by logic-based languages, e.g., MFOTL. Hawk is able to express such constraints referring to physical time. A related limitation is that InTempo currently cannot relate the current time point, i.e., when the query is executed, with temporal computations by the temporal GDN. Hence, InTempo performs an additional check (predicate \({\mathcal {P}}\)) to assess whether query results are conclusive with respect to the current time point. This check is standard in RV and hence seamlessly provided by MonPoly. We plan to integrate the check in InTempo by means of the auxiliary nodes mentioned above.

Queries in SNB draw from SQL-based languages and their statements resemble SQL queries. Their translation into a temporal logic like MTGL required a certain level of familiarity with temporal logics and resulted in non-trivial MTGCs. We plan to equip \(\mathscr {L}_\mathrm {T}\) with constructions that facilitate such translations, e.g., the exists-first abbreviation we introduced in Sect. 9.4. However, there are certain features of SQL-based languages which are typically not offered by logic-based languages, e.g., aggregations and the limiting or sorting of results—MFOTL stands out as it offers the capability of numerical operations such as aggregation. Related aspects of the SNB queries have been omitted from our formulation of IC4 and IC5 in \(\mathscr {L}_\mathrm {T}\).

The SNB experiments indicated that larger temporal GDNs may require a significant amount of memory. We plan to investigate whether the patterns used for \(\mathcal {Z}\)MR nodes can be optimized so that memory consumption is reduced.

9.6 Threats to validity

We organize this section based on the types of validity in [111], i.e., conclusion, internal, construct, and external.

Conclusion Validity Threats to conclusion validity refer to drawing incorrect conclusions about relationships between an experiment and its outcome, for instance, by reporting a non-existent correlation or by missing an existent one. We mitigated the possible impact of threats to conclusion validity by carefully selecting the experimental data; the log files used were either real (real log in the SHS evaluation), synthesized based on real data by employing statistical bootstrapping (x10 and x100), or generated by an independent benchmark (sf-0.1 and sf-1 in the SNB evaluation). Our synthesis method is documented in [93].

Each SHS experiment essentially executed the same query over an increasing event sequence thousands of times, therefore yielding measurements of an adequately large size. For the SNB, we conducted the experiments measuring the query execution time repeatedly and reported the average values. Moreover, we studied the benchmark characteristics and attempted to refine statements on the relationship between measurements and conclusions: for instance, for the SNB evaluation, we reported on the number of total additions of relevant nodes instead of the number of events per period.

Internal Validity In this context, threats to internal validity are influences which might have affected metrics, i.e., the query execution time and the memory consumption. In the following, we describe the measures that were taken to minimize such threats.

The experiments measured these metrics separately and systematically via a controlled simulation of an SHS and the partial execution of a benchmark. Multiple logs were used with an increasing event rate which evaluated the InTempo variants over an increasing load; all other aspects were kept identical: in the SHS case-study, the variants used the same metamodel and the same Monitor, Plan, Execute activities per experiment; similarly, the SNB experiments used the same metamodel, the same event sequences, and the same starting graph per scale factor. Both sets of experiments translated log events into model modifications. The experiments for Hawk used the same logs and translations as InTempo. The experiments for MonPoly entailed the systematic translation of events into relations based on fixed rules.

All experiments were performed on the same machine and, during the experiment, no other (active) task was run on the machine. The values reported in the results of the experiments for InTempo variants are based on the values reported by the JVM: the value of free memory was subtracted from the value of total memory. Before measuring memory consumption, we always suggested a garbage collection to the JVM. The values for MonPoly are based on statistics provided by the tool itself, which in turn relies on standard utilities available in UNIX operating systems.
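The JVM-based measurement described above amounts to the following sketch; note that, as stated, a garbage collection can only be suggested to the JVM:

```java
/** Measures used heap memory as total minus free, after suggesting a GC. */
public class MemoryProbe {
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // only a suggestion; the JVM may ignore or defer it
        return rt.totalMemory() - rt.freeMemory();
    }
}
```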

Construct Validity Threats to construct validity refer to situations where the metrics used do not actually measure the construct, i.e., the concept. We used the standard metrics of query execution time and memory consumption, measured in conventional ways. We have ensured that these metrics corresponded to the correct results. First, the detection of violations in the logs by the variants in the SHS evaluation has been manually double-checked by the authors and confirmed by MonPoly and Hawk. Additionally, the answers for the queries in the SNB evaluation have been confirmed by JAVA code. Finally, we have provided formal arguments to substantiate the claims on the correctness of the computation of temporal validity and the detection of all changes to temporal validity despite pruning the history representation.

External Validity Threats to external validity may restrict the generalization of our evaluation results outside the scope of the conducted experiments. In the following, we discuss threats which could influence the generalization of the experimental setting and measurements. Regarding the experimental setting, we have mitigated threats to external validity by creating the metamodel of the SHS case-study based on the artifact in [109], a peer-reviewed self-adaptation exemplar that has been used for the evaluation of solutions for self-adaptation. Moreover, the simulation used either real data or data that was synthesized based on the real data and enacted an instruction from a real medical guideline [89].

Similarly, the SNB experiments were based on the output of an established benchmark which employs sophisticated, well-documented methods for generating realistic data. Queries are similarly designed by experts to expose bottlenecks and stress-test the performance of graph-based technologies. The SNB data was generated using the default parameters, and the metamodel used closely resembles the original—we have only added vertices specific to InTempo, i.e., the MonitorableEntity, or specific to the evaluation, i.e., links of interest were encoded as vertices. We made minor modifications to the data which fixed a small number of consistency issues, i.e., deletions occurring before creations, which probably stem from the fact that deletions in the benchmark’s output are a rather new feature [108].

Regarding the generalization of the performance measurements of InTempo variants, we have conducted two evaluations based on considerably different application domains. The evaluations featured metamodels and models which differed with respect to size and graph characteristics: the SHS data corresponded to a relatively simple star structure which demonstrated the effects of pruning; the SNB data corresponded to a rather large graph with numerous inter-connections which resembled real social networks. The SHS queries showed the merits of a graph-based temporal logic for graph structures in runtime monitoring scenarios, while the SNB queries explored innovative use-cases of this temporal logic. Our implementation performed solidly in both settings and its results were emphasized via the comparison to MonPoly and Hawk.

We have applied the following optimizations to our implementations. For the removal of elements from EMF models, we transparently replaced the potentially expensive native EMF method with an optimized version. The replacement was done via a JAVA agent which was used for both Hawk and InTempo. The two evaluations were rather different with respect to the evolution of the RTM\(^{\mathrm{H}}\): SHS started with a single vertex, whereas SNB started with a significantly large graph. Hence, we configured the Story Pattern Matcher, the pattern matching tool used by InTempo, with a different strategy for search plan generation for the two evaluations.

On the comparison, we note that MonPoly is not intended for pattern matching. Our emulated pattern matching, although optimized, might have room for improvement. MonPoly relies on a point-based interpretation, while InTempo reasons over intervals, which can lead to discrepancies between interpretations in more extensive comparisons. One of the main reasons MonPoly was chosen for the comparison is the compatibility of MTGL and MFOTL, which allowed for a direct mapping of temporal operators and, therefore, reduced the risk of introducing any bias in translations. Based on this mapping, MonPoly could not monitor one of the properties used in the experiments. There might exist equivalent monitorable MFOTL formulas whose syntax, however, would not match that of the MTGC.

We used Hawk in a manner compliant with the examples on the tool’s website [35] at the time of writing. Hawk offers features which could potentially improve the performance of the tool. For example, EOL offers a syntax extension that supports pattern matching. Hawk supports a method to create annotations, i.e., predicates over the occurrence of elements or the values of their attributes. Annotations are updated in each indexation, thus rendering the index capable of accessing annotated elements via a lookup—however, annotations have to be defined manually and before the indexation of the model starts. Furthermore, an annotation cannot refer to another annotation; thus, nesting predicates is not supported. Such optimizations, their applicability, and their trade-offs will be investigated in our future work.

10 Related work

InTempo consolidates model instances into a single model, the RTM\(^{\mathrm{H}}\), and answers queries with soft real-time timing constraints on the history of the \(\mathrm{RTM}^{\mathrm{H}}\). The RTM\(^{\mathrm{H}}\) is encoded as a graph and hence queries are captured by temporal graph queries which are executed incrementally. We first discuss work that relates to InTempo on the technical level of graphs, graph queries, and incremental query executions. Then, we relate InTempo to other work from the MDE community that can, either directly or indirectly, query the history of a model. Finally, we discuss relevant work from the RV community and a related approach to processing streams of events.

Graphs, Queries, and Incremental Execution Formally, InTempo is founded on typed attributed graph transformation rules [37]. Therefore, we limit the discussion to graphs although the model and model transformation rules could be encoded in other data models and related technologies, e.g., [65] and [76]. In the RTM\(^{\mathrm{H}}\), time is captured by vertex attributes which are assumed to be present for each vertex in the metamodel, i.e., the type graph. Query executions are performed via graph transformation rules which write or perform computations with these (distinguished) attributes. This rule behavior resembles the foundational approach presented in [54] where graph transformation rules are extended with a notion of time.

The Viatra framework [104, 105] stores a graph in memory and features a query execution engine which executes a query incrementally by decomposing it into sub-queries. Contrary to Viatra, InTempo seamlessly integrates a notion of time in the graph representation, the query language, and the execution framework. Viatra uses a decomposition strategy similar to a GDN called RETE [42], whose performance effect on InTempo we plan to investigate. A Viatra extension [102] distributes the pattern matching effort for sub-queries over multiple processing units, which is another interesting future direction for InTempo.

The setting assumed by Viatra is similar to that of InTempo and resembles streaming [97] and active model transformations [13]: the graph (typically representing an RTM) is constantly being updated by a stream of graph elements or events that are mapped to graph elements; queries are executed after each change and their results are updated. Based on this setting, the Viatra-based solution by Búr et al. [24] captures safety properties via graph queries and is, therefore, capable of making assertions about the history of the graph. However, the solution does not support the integration of temporal statements into properties; properties only check for the occurrence of prohibited structures. Moreover, the solution does not support incremental query execution.

Implementations of time-aware graph databases typically build on a database back-end and introduce extensions which integrate a notion of time, e.g., [32, 59, 79]. Although back-ends with native support for graphs are capable of generating a push notification when a node or edge that meets certain conditions has been added or modified [2, 98] and, thus, of providing a certain degree of support for reactive settings, none of the implementations supports incremental execution of queries with complex patterns.

The approach presented in [6] analyzes a host graph stored in an in-memory database and extracts a sub-graph that features only elements that are relevant to a given query. Each change to the host graph triggers a new (incremental) analysis and an execution of the query over the sub-graph, which returns a complete answer set. The analysis does not integrate a notion of time—if it were to be extended so as to support temporal primitives and graphs with history, it could be used in conjunction with the timing-constraint-based pruning of InTempo to yield a sub-graph that would contain only elements that are both temporally and structurally relevant.

Solutions for graph stream analytics, e.g., [25, 58, 67, 71, 82], may also be seen as related to InTempo, since they are also concerned with the efficient storage and querying of graphs that are constantly updated. These solutions model history as a sequence of change-based snapshots stored either entirely or partly on disk. This allows for the storage of very large graphs but imposes an overhead on query executions. Moreover, models are not the primary focus of these solutions, which may render their utilization in an MDE context cumbersome.

The languages presented by Rötschke and Schürr [91] and in our earlier work [68] build on Story Diagrams [112], a visual language for the specification of pattern-matching tasks, to specify temporal graph queries with timing constraints. The two languages, however, focus purely on specification, and no tool support for either language exists.

Querying History in Model-driven Engineering The history of an RTM may be encoded via model versioning based on a (general-purpose) model repository, e.g., the solution by Haeusler et al. [56]. There is however a considerable difference between the objectives of InTempo and those of model repositories. For instance, branching is seamlessly supported by repositories, whereas, although it could potentially be supported by an RTM\(^{\mathrm{H}}\), it is beyond our current scope. On the other hand, for repositories it is often assumed that queries will mostly concern a single timestamp, i.e., a specific version; repositories are therefore optimized for such queries and are less suitable for the on-the-fly execution of pattern-based queries that refer to a period of time—as Haeusler et al. acknowledge.

Solutions which focus on the history of an RTM typically use an on-disk database to store history and then facilitate the execution of database queries at runtime. Hawk [44], to which InTempo was compared, uses the time-aware graph database in [59]. The solutions in [51] and [80] use a map- and a time-series-database, respectively. As the evaluation in Sect. 9 indicated, disk accesses take a significant toll on real-time querying performance, especially for far-reaching past queries which are likely to factor in multiple model or element instances in their runtime computations. Notably, Hawk offers the capability to annotate such queries manually prior to the execution, such that their matches are pre-computed while the system is being executed, which, however, is automatically accomplished by the temporal GDN in InTempo. Compared to an in-memory representation, on-disk databases offer increased scalability with respect to the size of the models that can be stored. All solutions above query history by means of OCL extensions that support temporal primitives—although Hawk’s extension is more expressive. They also conveniently support attribute-level history (contrary to InTempo, which requires attributes to be encoded as nodes in the metamodel, see Sect. 9.5.2) and on-the-fly operations on attribute values, e.g., aggregation. However, these features also increase the number of disk accesses during pattern matching.

Runtime Verification and Related Approaches Seen in a broader context, InTempo processes a sequence of events and verifies whether this sequence satisfies a temporal logic formula. This approach to verification is known as Runtime Verification (RV). In RV, an online algorithm incrementally processes a sequence of timestamped events and checks whether the prefix of the sequence processed so far satisfies a temporal logic property. The algorithm only stores data that are relevant to the property.
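The core idea of such an online algorithm can be sketched as follows. This is an illustrative toy monitor, not InTempo's (or MonPoly's) algorithm: it assumes events are (timestamp, name) pairs and checks the past-time metric property "every `p` event is preceded by a `q` event at most `window` time units earlier", storing only the data relevant to the property (a single timestamp).

```python
class OnlineMonitor:
    """Toy incremental monitor for the past-time metric property:
    'every `p` event is preceded by a `q` event at most `window`
    time units earlier'. Only property-relevant data is stored:
    the timestamp of the most recent `q` event."""

    def __init__(self, window):
        self.window = window
        self.last_q = None  # timestamp of the most recent q, if any

    def step(self, timestamp, name):
        """Process one event; return the verdict for the prefix so far."""
        if name == "q":
            self.last_q = timestamp
        if name == "p":
            return (self.last_q is not None
                    and timestamp - self.last_q <= self.window)
        return True  # events other than p cannot violate the property


monitor = OnlineMonitor(window=5)
trace = [(1, "q"), (3, "p"), (10, "p")]
verdicts = [monitor.step(t, e) for t, e in trace]
# per-event verdicts: [True, True, False]
```

The constant memory footprint of this monitor exemplifies the RV principle stated above: the algorithm retains only what the property needs, not the full event sequence.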

Based on this similarity, we compared the performance of InTempo to MonPoly [9, 10], a state-of-the-art RV tool. MonPoly is a command-line tool which notably combines a specification language which is adequately expressive for the use-cases discussed in this article with an efficient incremental monitoring algorithm. Its specification language is based on the Metric First-Order Temporal Logic [9] (MFOTL) which uses first-order relations to capture system entities and their relationships.

Besides MonPoly, solutions stemming from RV typically provide no or only partial support for key features of InTempo, e.g., events containing data, native or indirect support for graphs and bindings, and temporal operators with timing constraints: the solution by Dou et al. [34] presents a pattern-based RV technique which concerns propositional events, i.e., events containing no data, and is thus unsuitable for the use-cases of interest; the monitoring algorithm presented in [11] involves an interval computation similar to ours but concerns a propositional, past-only logic.

An exception is the tool DejaVu by Havelund and Peled [63], which can monitor properties specified in a first-order metric past-only logic with point-based semantics. Translating MTGCs into the DejaVu specification language would require emulating graph-based encodings (similar to MonPoly) and, moreover, reformulating MTGCs such that they feature only past operators. Such reformulations would forgo the one-to-one mapping between temporal operators in \(\mathscr {L}_\mathrm {T}\) and MFOTL and could be significantly less compact [61, 74]. Additionally, Havelund and Peled report that, currently, only timing constraints which span approx. 60 time units or less yield acceptable performance [62]. This restriction renders DejaVu unsuitable for the application scenarios targeted by InTempo.

The approach known as Complex Event Processing (CEP) focuses on processing a stream of events and detecting temporal patterns based on the content and the ordering of events. Detected patterns generate complex events which form another stream and can be further processed (see [29]). Outwardly, the objective of RV is similar to that of CEP. However, the two have fundamental differences. Of interest to the present discussion are the following: languages used in CEP are not based on temporal logic (in fact, they often lack clear semantics [53]), which makes a direct comparison difficult; and the capability of CEP to evaluate sequential patterns is typically limited, while it is of central importance to RV [57]—we refer to [57] for a more detailed discussion. Therefore, RV is considerably more relevant to InTempo.
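The stream-and-window mechanics of CEP can be sketched as follows. This is a minimal illustration under our own assumptions (events as (timestamp, name) pairs, a single sequential pattern), not the implementation of any particular CEP engine: a detector matches a two-event sequence within a time window and emits complex events that form a new stream.

```python
def detect_sequence(stream, first, second, window):
    """CEP-style sketch: scan a stream of (timestamp, name) events and
    emit a 'complex event' whenever a `first` event is followed by a
    `second` event within `window` time units. The emitted complex
    events form a new stream that could be processed further."""
    pending = []         # timestamps of unmatched `first` events
    complex_events = []  # the derived stream
    for ts, name in stream:
        # discard `first` occurrences that fell out of the time window
        pending = [t for t in pending if ts - t <= window]
        if name == first:
            pending.append(ts)
        elif name == second and pending:
            t0 = pending.pop(0)  # match the oldest pending occurrence
            complex_events.append((ts, f"{first}->{second}", t0))
    return complex_events


stream = [(1, "A"), (2, "A"), (4, "B"), (20, "B")]
print(detect_sequence(stream, "A", "B", window=5))
# [(4, 'A->B', 1)]
```

Note that the window here bounds the *temporal* distance between events of a single pattern occurrence; it does not express a logic formula, which illustrates the semantic gap to RV discussed above.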

Compared to InTempo, graph-based solutions that use CEP are limited in their ability to query the history of a model at runtime. Dávid et al. [30] present a solution based on a streaming transformation of an RTM. The solution generates events when patterns are matched and then uses CEP to check whether the generated events occur within a given time window, thus capturing, albeit indirectly and to a certain extent, temporal requirements on matched patterns. The solution by Barquero et al. [5] stores a stream of data as a graph (on disk) and executes graph queries to detect event patterns. The solution introduces the notion of a spatial window to restrict the size of the graph and thus the search space for pattern matching. However, queries executed in the restricted search space may yield inaccurate results.

11 Conclusion and future work

Following the common practice of representing a runtime model as a typed attributed graph, we introduced a language for graph-based model queries which incorporates a temporal logic to enable the formulation of temporal graph queries. Moreover, we introduced a scheme which enables the incremental execution of temporal graph queries over runtime models with history, i.e., runtime models with a creation and deletion timestamp for each entity. The scheme offers the option to prune entities from the model that are not relevant to query executions, thereby reducing the memory footprint. By incorporating a temporal logic, the scheme is capable of monitoring temporal properties at runtime. Building on this capability, we applied the scheme in a runtime adaptation scenario where query matches in the model capture adaptation issues which are handled by in-place model transformations. We presented an implementation which we evaluated based on a simulation of the scenario with real and synthetic data. We compared the performance of the implementation in detecting issues to a relevant runtime monitoring tool and a tool from the MDE community which is capable of querying the history of a model. Moreover, we evaluated the implementation with data which simulate the operation of a social network and are generated by an independent benchmark.

In the future, we plan to compare the query decomposition strategy in the framework of InTempo with the performance of other strategies. Moreover, we will investigate the number of events per second that InTempo can handle in other evaluation settings, which will indicate whether our scheme can be used in different application scenarios with considerably larger event streams. We will also consider technical optimizations, such as indexing structures that index matches based on their intervals and the optimization of patterns in auxiliary nodes of the evaluation framework. We plan to experiment with more aggressive pruning strategies for application scenarios with limited resources. An interesting research direction is the combination of InTempo with an EMF database which stores older parts of the model and enables their on-demand loading. Another direction with vast potential is the opportunity to observe and learn from the history recorded in the model; such knowledge could subsequently be used for predictions. Finally, we plan to utilize the ability of the scheme to detect the amount of time remaining until the violation of a property, enabling more sophisticated decision-making for runtime adaptations.