A Framework for Parameterized Monitorability
Abstract
We introduce a general framework for Runtime Verification, parameterized with respect to a set of conditions. These conditions are encoded in the trace generated by a monitored process, which a monitor can observe. We present this parameterized framework in its general form and prove that it corresponds to a fragment of HML with recursion, extended with these conditions. We then show how this framework can be applied to a number of instantiations of the set of conditions.
1 Introduction
Runtime Verification (RV) is a lightweight verification technique that checks whether a system satisfies a correctness property by analysing the current execution of the system [20, 29], expressed as a trace of execution events. Using the additional information obtained at runtime, the technique can often mitigate the state-explosion problems typically associated with more traditional verification techniques. At the same time, limiting the verification analysis to the current execution trace hinders the expressiveness of RV when compared to more exhaustive approaches. In fact, there are correctness properties that cannot be satisfactorily verified at runtime (e.g. the finiteness of the trace considered up to the current execution point prohibits the verification of liveness properties). For this reason, RV is often used as part of a multi-pronged approach towards ensuring system correctness [5, 6, 8, 14, 15, 25], complementing other verification techniques such as model checking, testing and type checking.
In order to attain an effective verification strategy consisting of multiple verification techniques that include RV, it is crucial to understand the expressive power of each technique: one can then determine how to best decompose the verification burden into subtasks that can then be assigned to the most appropriate verification technique. Monitorability concerns itself with identifying the properties that are analysable by RV. In [21, 22] (and subsequently in [2]), the problem of monitorability was studied for properties expressed in a variant of the modal \(\mu \)-calculus [26] called \(\mu \mathrm{HML}\) [28]. The choice of the logic was motivated by the fact that it can embed widely used logics such as CTL and LTL, and by the fact that it is agnostic of the underlying verification method used—this leads to better separation of concerns and guarantees a good level of generality for the results obtained. The main result in [2, 21, 22] is the identification of a monitorable syntactic subset of the logic \(\mu \mathrm{HML}\) (i.e., a set of logical formulas for which monitors carrying out the necessary runtime analysis exist) that is shown to be maximally expressive (i.e., any property that is monitorable in the logic may be expressed in terms of this syntactic subset). We are unaware of other maximality results of this kind in the context of RV.
In this work we strive towards extending the monitorability limits identified in [2, 21, 22] for \(\mu \mathrm{HML}\). In particular, for any logic or specification language, monitorability is a function of the underlying monitoring setup. In [2, 21, 22], the framework assumes a classical monitoring setup, whereby a (single) monitor incrementally analyses an ordered trace of events describing the computation steps that were executed by the system. A key observation made in this paper is that, in general, execution traces need not be limited to the reporting of events that happened. For instance, they may describe events that could not have happened at specific points in the execution of a system. Alternatively, they may also include descriptions for depth-bounded trees of computations that were possible at specific points in an execution. We conjecture that there are instances where this additional information can be feasibly encoded in a trace, either dynamically or by way of a preprocessing phase (based, e.g., on the examination of logs of previous system executions, or on the full static checking of subcomponents making up the system). More importantly, this additional information could, in principle, permit the verification of more properties at runtime.
Concretely, our contributions are the following:

1. We show how these aspects can be expressed and studied in a general monitoring framework with (abstract) conditions (Theorems 3 and 4 in Sects. 3 and 5, resp.).

2. We instantiate the general framework with trace conditions that describe the inability to perform actions, amounting to refusals [31] (Propositions 1 and 5).

3. We also instantiate the framework with conditions describing finite execution graphs, amounting to the recursion-free fragment of the logic [24] (Propositions 2 and 3).

4. Finally, we instantiate the framework with trace conditions that record information from previous monitored runs of the system (Proposition 4). This, in turn, leads us to a notion of alternating monitoring that allows monitors to aggregate information over monitored runs. We show that this extends the monitorable fragment of our logic in a natural and significant way.
The remainder of the paper is structured as follows. After outlining the necessary preliminaries in Sect. 2, we develop our parameterized monitoring framework with conditions in Sect. 3 for a monitoring setup that allows monitors to observe both silent and external actions of systems. The two condition instantiations for this strong setting are presented in Sect. 4. In Sect. 5 we extend the parameterized monitoring framework with conditions to a weak monitoring setup that abstracts from internal moves, followed by two instantiations similar to those presented in Sect. 4. Section 6 concludes by discussing related and future work.
2 Background
Example 1
Specification Logic. Properties about the behaviour of processes may be specified via the logic \(\mu \mathrm{HML}\) [4, 28], a reformulation of the modal \(\mu \)-calculus [26].
Definition 1
The logic \(\mu \mathrm{HML}\) is very expressive. It is also agnostic of the technique to be employed for verification. The property of monitorability, however, fundamentally relies on the monitoring setup considered.
Monitoring Systems. A monitoring setup on \(\textsc {Act} \) is a triple \( \langle M, I, L \rangle \), where L is a system LTS on \(\textsc {Act} \), M is a monitor LTS on \(\textsc {Act} \), and I is the instrumentation describing how to compose L and M into an LTS, denoted by I(M, L), on \(\textsc {Act} \). We call the pair (M, I) a monitoring system on \(\textsc {Act} \). For \(M = \langle \textsc {Mon}, \textsc {Act},\rightarrow _M \rangle \), \(\textsc {Mon}\) is the set of monitor states (ranged over by m) and \(\rightarrow _M\) is the monitor semantics, described in terms of the behavioural state transitions a monitor takes when it analyses trace events \(\mu \in \textsc {Act} \cup \{\tau \}\). The states of the composite LTS I(M, L) are written as \(m \triangleleft p\), where m is a monitor state and p is a system state; the monitored-system transition relation is denoted here by \(\rightarrow _{I(M,L)}\). We present our results with a focus on rejection monitors, i.e., monitors with a designated rejection state no, and hence on safety fragments of the logic \(\mu \mathrm{HML}\). However, our results and arguments apply dually to acceptance monitors (with a designated acceptance state yes) and co-safety properties; see [21, 22] for details.
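As a purely illustrative rendering of this setup (our own sketch, not part of the formal development), the following Python fragment models the system and monitor LTSs as transition maps and composes them in the spirit of the rules iMon, iTer and mVrd of Table 1; all state and function names are invented.

```python
# Illustrative sketch of a monitoring setup <M, I, L>.  Both LTSs are
# deterministic maps from (state, action) to a successor state; 'no' is the
# rejection verdict and 'end' the inconclusive verdict.
def instrument(mon_lts, sys_lts, m, p, trace):
    """Run monitor state m alongside system state p over a trace of actions:
    the monitor follows the system when it has a matching transition (cf. rule
    iMon) and otherwise falls to the inconclusive verdict (cf. iTer);
    verdicts, once reached, never change (cf. mVrd)."""
    for a in trace:
        if (p, a) not in sys_lts:          # the system cannot perform a
            break
        p = sys_lts[(p, a)]
        if m not in ("no", "end"):         # verdict states are irrevocable
            m = mon_lts.get((m, a), "end")
    return m, p

# A rejection monitor for the safety property "no b straight after an a":
mon = {("m0", "a"): "m1", ("m1", "b"): "no"}
sys = {("p0", "a"): "p1", ("p1", "b"): "p0"}
print(instrument(mon, sys, "m0", "p0", ["a", "b"]))  # -> ('no', 'p0')
```

Note how the rejection verdict persists for any extension of the trace, matching the suffix-closed behaviour discussed below Table 1.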
Definition 2
We define monitorability for \(\mu \text {HML}\) in terms of monitoring systems (M, I).
Definition 3

For all closed \(\varphi \in \varLambda \), there exists an m from M that (M, I)monitors for \(\varphi \).

For all m of M, there exists a closed \(\varphi \in \varLambda \) that is (M, I)monitored by m. \(\blacksquare \)
We note that if a monitoring system and a fragment \(\varLambda \) of \({\mu \text {HML}}\) satisfy the conditions of Definition 3, then \(\varLambda \) is the largest fragment of \({\mu \text {HML}}\) that is monitored by the monitoring system. Stated otherwise, any other logic fragment \(\varLambda '\) that satisfies the conditions of Definition 3 must be exactly as expressive as \(\varLambda \), i.e., \(\forall \varphi ' \in \varLambda '\cdot \exists \varphi \in \varLambda \cdot \varphi \equiv \varphi '\) and vice versa. Definition 3 can be given dually for acceptance-monitorability, when considering acceptance monitors. We next review two monitoring systems that respectively rejection-monitor for two different fragments of \(\mu \mathrm{HML}\). We omit the corresponding monitoring systems for acceptance monitors, which monitor for the dual fragments of \(\mu \mathrm{HML}\).
The Basic Monitoring Setup. The following monitoring system, presented in [2], does not distinguish between silent actions and external actions.
Definition 4
where x comes from a countably infinite set of monitor variables. Constant \(\textsf {no}\) denotes the rejection verdict state whereas \(\textsf {end}\) denotes the inconclusive verdict state. The basic monitor LTS \(M_b\) is the one whose states are the closed monitors of \(\textsc {Mon}_b\) and whose transition relation is defined by the (standard) rules in Table 1 (we elide the symmetric rule for \(m+n\)). \(\blacksquare \)
Note that, by rule mVrd in Table 1, verdicts are irrevocable, and hence monitors can only describe suffix-closed behaviour.
Definition 5
Given a system LTS L and a monitor LTS M that agree on \(\textsc {Act} \), the basic instrumentation LTS, denoted by \(I_b(M,L)\), is defined by the rules iMon and iTer in Table 1. (We do not consider rule iAbs for now.) \(\blacksquare \)
Table 1. Behaviour and instrumentation rules for monitored systems (\(v {\in } \{\textsf {end}, \textsf {no}\}\)): monitor semantics and instrumentation semantics.
We refer to the pair \((M_b,I_b)\) from Definitions 4 and 5 as the basic monitoring system. For each system LTS L that agrees with the basic monitoring system on \(\textsc {Act} \), we can show a correspondence between the respective monitoring setup \(\langle M_b, I_b, L \rangle \) and the following syntactic subset of \(\mu \mathrm{HML}\).
Definition 6
Theorem 1
([2]). The basic monitoring system \((M_b,I_b)\) monitors for the logical fragment sHML. \(\square \)
Definition 8
The set \(\textsc {Mon}_e\) of external monitors on \(\textsc {Act} \) contains all the basic monitors that do not use the silent action \(\tau \). The corresponding external monitor LTS \(M_e\) is defined similarly to \(M_b\), but with the closed monitors in \(\textsc {Mon}_e\) as its states. External instrumentation, denoted by \(I_e\), is defined by the three rules iMon, iTer and iAbs in Table 1, where, in the case of iMon and iTer, the action \(\mu \) is replaced by the external action \(\alpha \). We refer to the pair \((M_e,I_e)\) as the external monitoring system, amounting to the setup in [21, 22]. \(\blacksquare \)
Theorem 2
([22]). The external monitoring system \((M_e,I_e)\) rejection-monitors for the logical fragment WsHML. \(\square \)
3 Monitors that Detect Conditions
Given a set of processes P, a pair (C, r) is a condition framework when C is a non-empty set of conditions and \(r:C\rightarrow 2^P\) is a valuation function. We assume a fixed condition framework (C, r) and extend the syntax and semantics of \(\mu \mathrm{HML}\) so that, for every condition \(c \in C\), both c and \(\lnot c\) are formulas and, for every LTS L on a set of processes P, \([\![{c}]\!]= r(c)\) and \([\![{\lnot c}]\!]= P \setminus r(c)\). We call the extended logic \(\mu \mathrm{HML}^{(C,r)}\). Since, in all the instances we consider, r is easily inferred from C, it is often omitted, and we simply write C instead of (C, r) and \(\mu \mathrm{HML}^{C}\) instead of \(\mu \mathrm{HML}^{(C,r)}\). We say that process p satisfies c when \(p \in [\![{c}]\!]\). We assume that C is closed under negation, meaning that for every \(c \in C\) there is some \(c'\in C\) such that \([\![{c'}]\!]= [\![{\lnot c}]\!]\). Conditions represent certain properties of processes that the instrumentation is able to report.
We extend the syntax of monitors, so that if m is a monitor and c a condition, then c.m is a monitor. The idea is that if c.m detects that the process satisfies c, then it can transition to m.
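As a hypothetical illustration (our own encoding, not the paper's), a condition can be modelled as a predicate on system states, playing the role of the valuation r, and the prefix c.m as a tagged pair; the instrumentation then unfolds c.m to m precisely on states satisfying c:

```python
# Illustrative sketch: a condition c is a predicate on system states, and
# the monitor c.m is the pair ("cond", c, m).  In the spirit of the
# condition-instrumentation rule, c.m transitions to m exactly when the
# current system state satisfies c.
def unfold_conditions(monitor, p):
    """Unfold any leading condition prefixes satisfied by state p."""
    while isinstance(monitor, tuple) and monitor[0] == "cond":
        _, cond, rest = monitor
        if not cond(p):       # p does not satisfy c: c.m stays put
            break
        monitor = rest        # c.m -> m, since p is in r(c)
    return monitor

stable = lambda p: p == "p_stable"     # a sample condition valuation
m = ("cond", stable, "no")             # the rejection monitor c.no
print(unfold_conditions(m, "p_stable"))        # -> no
print(unfold_conditions(m, "p_busy") is m)     # -> True
```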
Definition 9
Definition 10
where \(c \in C\). We note that \(\lnot c \vee \varphi \) can be viewed as an implication \(c \rightarrow \varphi \) asserting that if c holds, then \(\varphi \) must also hold. \(\blacksquare \)
It is immediate that \(\mathrm{sHML}^C\) is a fragment of \({\mu \text {HML}}^{C}\) and, when \(C \subseteq {\mu \text {HML}}\), it is also a fragment of \(\mu \text {HML}\). Finally, if C is closed under negation, then \(\lnot c \vee \varphi \) can be rewritten as \(c' \vee \varphi \), where \([\![{c'}]\!]= [\![{\lnot c}]\!]\); in what follows we often take advantage of this equivalence to simplify the syntax of \(\mathrm{sHML}^C\).
Theorem 3
The monitoring system \((M_b^C,I_b^C)\) monitors for \(\mathrm{sHML}^C\). \(\square \)
We note that Theorem 3 implies that \(\mathrm{sHML}^{C}\) is the largest monitorable fragment of \({\mu \text {HML}}^{C}\), relative to C.
4 Instantiations
We consider two possible instantiations for parameter C in the framework presented in Sect. 3. Since each of these instantiations consists of a fragment from the logic \(\mu \mathrm{HML}\) itself, they both show how monitorability for \(\mu \mathrm{HML}\) can be extended when using certain augmented traces.
4.1 The Inability to Perform an Action
The monitoring framework of [2, 22] (also used in other works such as [18, 19]) is based on the idea that, while a system is executing, it performs discrete computational steps, called events (actions), that are recorded and relayed to the monitor for analysis. Based on the analysed events, the monitor then transitions from state to state. One may, however, also consider instrumentations that record a system's inability to perform a certain action. Examples of this arise naturally in situations where actions are requested unsuccessfully from a system by an external entity, or whenever the instrumentation is able to report system stability (i.e., the inability to perform internal actions). For instance, such observations were considered in [1, 31] in the context of testing preorders.
Proposition 1
The monitoring system \((M_b^{F_\textsc {Act}}\!,I_b^{F_\textsc {Act}}\!)\) monitors for the logical fragment \(\mathrm{sHML}^{F_\textsc {Act}}\). \(\square \)
A special case of interest is that of monitors that can detect process stability, i.e., processes satisfying \([ \tau ]\textsf {ff}\). Such monitors monitor for \(\mathrm{sHML}^{\{[ \tau ]\textsf {ff}\}}\), namely sHML from Definition 6 extended with formulas of the form \(\langle \tau \rangle \textsf {tt}\vee \varphi \).
4.2 DepthBounded Static Analysis
Proposition 2
The monitoring system \((M_b^{\mathrm{HML}},I_b^{\mathrm{HML}})\) monitors for the logical fragment \(\mathrm{sHML}^{\mathrm{HML}}\). \(\square \)
Instead of HML, we can alternatively use the fragment \(\mathrm{HML}^d\) of HML that only allows formulas whose nesting depth of modalities is at most d. Since the complexity of checking HML formulas depends directly on this modal depth, there are cases where the overhead of checking such formulas is low enough for them to be adequately checked at runtime rather than statically.
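To make the role of modal depth concrete, here is a small sketch (our own encoding, not from the paper) of recursion-free HML formulas as nested tuples, with a function computing the depth that delimits the fragment \(\mathrm{HML}^d\):

```python
# Illustrative sketch: recursion-free HML formulas as nested tuples, e.g.
# [a][b]ff is ("box", "a", ("box", "b", "ff")).  The modal depth bounds the
# cost of checking a formula at runtime, motivating the fragment HML^d.
def modal_depth(phi):
    if phi in ("tt", "ff"):
        return 0
    tag = phi[0]
    if tag in ("box", "dia"):            # [a]phi and <a>phi
        return 1 + modal_depth(phi[2])
    if tag in ("and", "or"):
        return max(modal_depth(phi[1]), modal_depth(phi[2]))
    raise ValueError(f"unknown construct: {tag}")

phi = ("box", "a", ("or", ("dia", "b", "tt"), ("box", "c", "ff")))
print(modal_depth(phi))  # -> 2, so phi lies in HML^2 but not HML^1
```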
5 Extending External Monitorability
5.1 External Monitoring with Conditions
We define the external monitoring system with conditions similarly to Sect. 3. The syntax of Definition 8 is extended so that, for any instance of C, if m is a monitor and c a condition from C, then c.m is a monitor.
Definition 11
where \(c \in C\). Cmonitor behaviour is defined as in Table 1, but extending rule mAct to condition prefixes that generate condition actions (i.e., \(\mu \) ranges over \(\textsc {Act} \cup C\)). We call the resulting monitor LTS \(M^C_e\).
For the instrumentation relation called \(I^C_e\), we consider the rules iMon, iTer from Table 1 for external actions \(\alpha \) instead of the general action \(\mu \), rule iAbs from the same table, and rule iCon from Sect. 3. \(\blacksquare \)
Note that the monitoring system \((M^C_e,I^C_e)\) may be used to detect \(\tau \)-transitions implicitly—we conjecture that this cannot be avoided in general. Consider two conflicting conditions \(c_1\) and \(c_2\), i.e., \([\![{c_1}]\!]{\cap } [\![{c_2}]\!]{=} \emptyset \). Definition 11 permits monitors of the form \(c_1{.}c_2{.}m\) that encode the fact that state m can only be reached when the system under scrutiny performs a non-empty sequence of \(\tau \)-moves to transition from a state satisfying \(c_1\) to another state satisfying \(c_2\). This is, in some sense, also related to the obscure silent-action monitoring studied in [2].
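This observation can be sketched as follows (an illustrative fragment with invented names): since the valuations of the two conflicting conditions are disjoint, no single state satisfies both, so observing one and then the other betrays at least one intervening \(\tau \)-move.

```python
# Illustrative sketch: given disjoint conditions c1 and c2, the monitor
# c1.c2.m reaches m only if the system silently moves from a c1-state to a
# strictly later c2-state, so reaching m implicitly detects a tau-move.
def reaches_m(c1, c2, silent_states):
    """Does c1.c2.m reach m over this sequence of tau-connected states?"""
    for i, p in enumerate(silent_states):
        if c1(p):  # the first prefix fires here ...
            # ... so the second must fire at a strictly later state
            return any(c2(q) for q in silent_states[i + 1:])
    return False

c1 = lambda p: p == "busy"
c2 = lambda p: p == "idle"                  # disjoint from c1
print(reaches_m(c1, c2, ["busy", "idle"]))  # -> True: a tau-move occurred
print(reaches_m(c1, c2, ["busy"]))          # -> False
```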
Definition 12
Theorem 4
The monitoring system \((M^C_e,I^C_e)\) monitors for \(\mathrm{WsHML}^{C}\). \(\square \)
We highlight the need to insulate the appearance of the implication \(\lnot c \vee \varphi \) from internal system behaviour by using the modality \([[ \varepsilon ]]\) in Definition 12. For conditions that are invariant under \(\tau \)transitions, this modality is not required but it cannot be eliminated otherwise; we revisit this point in Example 2.
5.2 Instantiating External Monitors with Conditions
We consider three different instantiations to our parametric external monitoring system of Sect. 5.1.
Proposition 3
The monitoring system \((M_e^{w\mathrm{HML}},I_e^{w\mathrm{HML}})\) monitors for the logical fragment \(\mathrm{WsHML}^{w\mathrm{HML}}\). \(\square \)
An important observation (that is perhaps surprising) is that \(\mathrm{WsHML}^{w\mathrm{HML}}\) is not a fragment of \(\mathrm{W}{\mu \mathrm{HML}}\), as the following example demonstrates.
Example 2
Previous Runs and Alternating Monitoring. A monitoring system could reuse information from previous system runs, perhaps recorded as execution logs, and whenever (sub)traces can be associated with specific states of the system, these can also be used as an instantiation for our parametric framework. More concretely, in [21, 22] it is shown that traces can be used to characterise the violation of \(\mathrm{WsHML}\) formulas, or the satisfaction of formulas from the dual fragment, \(\mathrm{WcHML}\), defined below.
Definition 13
Proposition 4
The monitoring system \((M_e^{\mathrm{WcHML}},I_e^{\mathrm{WcHML}})\) rejectionmonitors for the logical fragment \(\mathrm{WsHML}^{\mathrm{WcHML}}\). \(\square \)
One should observe that in this case, \(\mathrm{WsHML}^\mathrm{WcHML}\) is a fragment of \(\mathrm{W}{\mu \mathrm{HML}}\), in contrast to the previous instantiation \(\mathrm{WsHML}^{w\mathrm{HML}}\) from Sect. 5.2.
Lemma 1
For every \([[ \varepsilon ]](\eta \vee \varphi ) \in \mathrm{WsHML}^\mathrm{WcHML}\) (where \(\eta \in \mathrm{WsHML}\)), we have \([[ \varepsilon ]](\eta \vee \varphi ) \equiv \eta \vee \varphi \). \(\square \)
Corollary 1
For every formula in \(\mathrm{WsHML}^\mathrm{WcHML}\), there is a logically equivalent formula in \(\mathrm{W}{\mu \mathrm{HML}}\). \(\square \)

This yields an alternating hierarchy of fragments, defined inductively:

- \(\mathrm{WsHML}^1 = \mathrm{WsHML}\) and \(\mathrm{WcHML}^1 = \mathrm{WcHML}\); and
- \(\mathrm{WsHML}^{i+1} = \mathrm{WsHML}^{\mathrm{WcHML}^i}\) and \(\mathrm{WcHML}^{i+1} = \mathrm{WcHML}^{\mathrm{WsHML}^i}\).
Failure to Execute an Action and Refusals. In Subsect. 4.1, we instantiated the condition set C as the set of formulas from \({\mu \text {HML}}\) that assert the inability of a process to perform an action. These formulas are of the form \([ \alpha ]\textsf {ff}\). We now recast this approach in the setting of weak monitorability. In this setting, where the monitoring system and the specification formulas ignore silent transitions, the inability of a process to perform an \(\alpha \)-transition acquires a different meaning from the one used for the basic system. In particular, we consider a stronger version of these conditions that incorporates stability; this makes them invariant over \(\tau \)-transitions. We say that p refuses \(\alpha \) when \(p \not \xrightarrow {\tau }\) and \(p \not \xrightarrow {\alpha }\). In [31], a very similar notion is used for refusal testing (see also [1]). Thus, much in line with [31], we use the following definition.
Definition 14
A process p of an LTS L refuses action \(\alpha \in \textsc {Act}\), written \(p \ \texttt {ref}\ \alpha \), when \(p \not \xrightarrow {\tau }\) and \(p \not \xrightarrow {\alpha }\). The set of conditions that corresponds to refusals is thus \(R_\textsc {Act} = \{ [ \tau ] \textsf {ff}\wedge [ \alpha ]\textsf {ff}\mid \alpha \in \textsc {Act} \}.\) \(\blacksquare \)
Again, \(\langle \tau \rangle \textsf {tt}\vee \langle \alpha \rangle \textsf {tt}\vee \varphi \) is best read as the implication \(([ \tau ] \textsf {ff}\wedge [ \alpha ]\textsf {ff})\rightarrow \varphi \): if the process is stable and cannot perform an \(\alpha \)transition, then \(\varphi \) must hold.
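Definition 14 can be sketched operationally as follows (an illustrative check over a toy LTS; the state and action names are invented):

```python
# Illustrative sketch of refusals: a state p refuses alpha iff it is stable
# (no tau-transition) and has no alpha-transition.  The LTS is a set of
# (source, action, target) triples.
def refuses(lts, p, alpha):
    enabled = {a for (q, a, _) in lts if q == p}
    return "tau" not in enabled and alpha not in enabled

lts = {("p0", "a", "p1"), ("p1", "tau", "p2"), ("p2", "b", "p0")}
print(refuses(lts, "p0", "b"))   # -> True: p0 is stable and cannot do b
print(refuses(lts, "p1", "b"))   # -> False: p1 is not stable
print(refuses(lts, "p2", "b"))   # -> False: p2 can perform b
```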
Proposition 5
The monitoring system \((M_e^{R_\textsc {Act}},I_e^{R_\textsc {Act}})\) monitors for the logical fragment \(\mathrm{WsHML}^{R_\textsc {Act}}\). \(\square \)
Example 3
Example 4
6 Conclusions
In order to devise effective verification strategies that straddle the pre- and post-deployment phases of software production, one needs to better understand the monitorability of the correctness properties that are to be verified. We have presented a general framework that allows us to determine maximal monitorable fragments of an expressive logic, namely \(\mu \mathrm{HML}\), that is agnostic of the verification technique employed. By way of a number of instantiations, we also show how the framework can be used to reason about the monitorability induced by various forms of augmented traces. Our next immediate concern is to validate the proposed instantiations empirically by constructing monitoring systems and tools based on these results, as we have already done for the original monitorability results of [21, 22] in [9, 10, 12].
Related Work. Monitorability for \(\mu \mathrm{HML}\) was first examined in [21, 22]. This work introduced the external monitoring system and identified \(\mathrm{WsHML}\) as the largest monitorable fragment of \(\mu \mathrm{HML}\) with respect to that system. The ensuing work in [2] focused on monitoring setups that can distinguish silent actions to varying degrees, and introduced the basic monitoring system, showing analogous monitorability results for \(\mu \text {HML}\).
Monitorability has also been examined for languages defined over traces, such as LTL. Pnueli and Zaks in [32] define a notion of monitorability over traces, although they do not establish maximality results. Diekert and Leucker revisited monitorability from a topological perspective in [16]. Falcone et al. in [17] extended the work in [32] to incorporate enforcement and introduced a notion of monitorability on traces that is parameterized with respect to a truth domain, which corresponds to our separation into acceptance- and rejection-monitorable properties. In [13], the authors use a monitoring system that can generate derivations of satisfied formulas from a fragment of LTL. However, they do not argue that this fragment is in any way maximal. There is a significant body of work on synthesizing monitors from LTL formulas, e.g. [13, 23, 33, 35], and it would be worth investigating whether our general techniques for monitor synthesis can be applied effectively in these cases.
Phillips introduced refusal testing in [31] as a way to extend the capabilities of testing (see [18] for a discussion on how our monitoring setup relates to testing preorders). The meaning of refusals in [31] is very close to the one in Definition 14 and it is interesting to note how Phillips’ use of tests for refusal formulas is similar to our monitoring mechanisms for refusals. Abramsky [1] uses refusals in the context of a much more powerful testing machinery, in order to identify the kind of testing power that is required for distinguishing nonbisimilar processes.
The decomposition of the verification burden across verification techniques, or across iterations of alternating monitoring runs as presented in Sect. 5, can be seen as a method for quotienting. In [7], Andersen studies quotienting of the specification logics discussed in this paper to reduce the state-space during model checking and thus increase its efficiency (see also [27] for a more recent treatment). The techniques used rely heavily on the model’s concurrency constructs and may produce formulas that are larger than the original, but which can be checked against a smaller component of the model. One would occasionally expect to encounter similar difficulties in multi-pronged approaches to verification.
References
1. Abramsky, S.: Observation equivalence as a testing equivalence. Theor. Comput. Sci. 53(2–3), 225–241 (1987)
2. Aceto, L., Achilleos, A., Francalanza, A., Ingólfsdóttir, A.: Monitoring for silent actions. In: 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2017 (2017, to appear)
3. Aceto, L., Achilleos, A., Francalanza, A., Ingólfsdóttir, A., Kjartansson, S.Ö.: Determinizing monitors for HML with recursion. CoRR abs/1611.10212 (2016)
4. Aceto, L., Ingólfsdóttir, A., Larsen, K.G., Srba, J.: Reactive Systems: Modelling, Specification and Verification. Cambridge University Press, New York (2007)
5. Ahrendt, W., Chimento, J.M., Pace, G.J., Schneider, G.: A specification language for static and runtime verification of data and control properties. In: Bjørner, N., de Boer, F. (eds.) FM 2015. LNCS, vol. 9109, pp. 108–125. Springer, Cham (2015)
6. Aktug, I., Naliuka, K.: ConSpec – a formal language for policy specification. Sci. Comput. Program. 74(1–2), 2–12 (2008)
7. Andersen, H.R.: Partial model checking (extended abstract). In: Proceedings of the Tenth Annual IEEE Symposium on Logic in Computer Science, pp. 398–407. IEEE (1995)
8. Artho, C., Barringer, H., Goldberg, A., Havelund, K., Khurshid, S., Lowry, M.R., Pasareanu, C.S., Rosu, G., Sen, K., Visser, W., Washington, R.: Combining test case generation and runtime verification. Theor. Comput. Sci. 336(2–3), 209–234 (2005)
9. Attard, D.P., Francalanza, A.: A monitoring tool for a branching-time logic. In: Falcone, Y., Sánchez, C. (eds.) RV 2016. LNCS, vol. 10012, pp. 473–481. Springer, Cham (2016)
10. Attard, D.P., Francalanza, A.: Trace partitioning and local monitoring for asynchronous components. In: Cimatti, A., Sirjani, M. (eds.) SEFM 2017. LNCS, vol. 10469, pp. 219–235. Springer, Cham (2017)
11. Biere, A., Cimatti, A., Clarke, E., Zhu, Y.: Symbolic model checking without BDDs. In: Cleaveland, W.R. (ed.) TACAS 1999. LNCS, vol. 1579, pp. 193–207. Springer, Heidelberg (1999)
12. Cassar, I., Francalanza, A.: On implementing a monitor-oriented programming framework for actor systems. In: Ábrahám, E., Huisman, M. (eds.) IFM 2016. LNCS, vol. 9681, pp. 176–192. Springer, Cham (2016)
13. Cini, C., Francalanza, A.: An LTL proof system for runtime verification. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 581–595. Springer, Heidelberg (2015)
14. Decker, N., Leucker, M., Thoma, D.: jUnit\(^{\text{RV}}\) – adding runtime verification to jUnit. In: Brat, G., Rungta, N., Venet, A. (eds.) NFM 2013. LNCS, vol. 7871, pp. 459–464. Springer, Heidelberg (2013)
15. Desai, A., Dreossi, T., Seshia, S.A.: Combining model checking and runtime verification for safe robotics. In: Lahiri, S., Reger, G. (eds.) RV 2017. LNCS, vol. 10548, pp. 172–189. Springer, Cham (2017)
16. Diekert, V., Leucker, M.: Topology, monitorable properties and runtime verification. Theor. Comput. Sci. 537, 29–41 (2014)
17. Falcone, Y., Fernandez, J.C., Mounier, L.: What can you verify and enforce at runtime? Int. J. Softw. Tools Technol. Transf. 14(3), 349–382 (2012)
18. Francalanza, A.: A theory of monitors. In: Jacobs, B., Löding, C. (eds.) FoSSaCS 2016. LNCS, vol. 9634, pp. 145–161. Springer, Heidelberg (2016)
19. Francalanza, A.: Consistently-detecting monitors. In: Meyer, R., Nestmann, U. (eds.) 28th International Conference on Concurrency Theory (CONCUR 2017). LIPIcs, vol. 85, pp. 8:1–8:19. Schloss Dagstuhl, Dagstuhl (2017)
20. Francalanza, A., Aceto, L., Achilleos, A., Attard, D.P., Cassar, I., Della Monica, D., Ingólfsdóttir, A.: A foundation for runtime monitoring. In: Lahiri, S., Reger, G. (eds.) RV 2017. LNCS, vol. 10548, pp. 8–29. Springer, Cham (2017)
21. Francalanza, A., Aceto, L., Ingolfsdottir, A.: On verifying Hennessy-Milner logic with recursion at runtime. In: Bartocci, E., Majumdar, R. (eds.) RV 2015. LNCS, vol. 9333, pp. 71–86. Springer, Cham (2015)
22. Francalanza, A., Aceto, L., Ingolfsdottir, A.: Monitorability for the Hennessy-Milner logic with recursion. Formal Methods Syst. Des. 51(1), 87–116 (2017)
23. Geilen, M.: On the construction of monitors for temporal logic properties. Electron. Notes Theor. Comput. Sci. 55(2), 181–199 (2001)
24. Hennessy, M., Milner, R.: Algebraic laws for nondeterminism and concurrency. J. ACM 32(1), 137–161 (1985)
25. Kejstová, K., Ročkai, P., Barnat, J.: From model checking to runtime verification and back. In: Lahiri, S., Reger, G. (eds.) RV 2017. LNCS, vol. 10548, pp. 225–240. Springer, Cham (2017)
26. Kozen, D.: Results on the propositional \(\mu \)-calculus. Theor. Comput. Sci. 27(3), 333–354 (1983)
27. Lang, F., Mateescu, R.: Partial model checking using networks of labelled transition systems and Boolean equation systems. Log. Methods Comput. Sci. 9(4), 1–32 (2013)
28. Larsen, K.G.: Proof systems for satisfiability in Hennessy-Milner logic with recursion. Theor. Comput. Sci. 72(2), 265–288 (1990)
29. Leucker, M., Schallhart, C.: A brief account of runtime verification. J. Log. Algebraic Program. 78(5), 293–303 (2009)
30. Milner, R.: Communication and Concurrency. Prentice-Hall, Upper Saddle River (1989)
31. Phillips, I.: Refusal testing. Theor. Comput. Sci. 50(3), 241–284 (1987)
32. Pnueli, A., Zaks, A.: PSL model checking and runtime verification via testers. In: Misra, J., Nipkow, T., Sekerinski, E. (eds.) FM 2006. LNCS, vol. 4085, pp. 573–586. Springer, Heidelberg (2006)
33. Sen, K., Roşu, G., Agha, G.: Generating optimal linear temporal logic monitors by coinduction. In: Saraswat, V.A. (ed.) ASIAN 2003. LNCS, vol. 2896, pp. 260–275. Springer, Heidelberg (2003)
34. Stirling, C.: Modal and Temporal Properties of Processes. Springer, New York (2001)
35. Vardi, M.Y.: An automata-theoretic approach to linear temporal logic. In: Moller, F., Birtwistle, G. (eds.) Logics for Concurrency. LNCS, vol. 1043, pp. 238–266. Springer, Heidelberg (1996)