A Theory of Monitors
Abstract
We develop a behavioural theory for monitors — software entities that passively analyse the runtime behaviour of systems so as to infer properties about them. First, we extend the monitor language and instrumentation relation of [17] to handle π-calculus process monitoring. We then identify contextual behavioural preorders that allow us to relate monitors according to criteria defined over monitored executions of π-calculus processes. Subsequently, we develop alternative monitor preorders that are more tractable, and prove full abstraction for the latter alternative preorders with respect to the contextual preorders.
1 Introduction
Monitors (execution monitors [32]) are software entities that are instrumented to execute alongside a program so as to determine properties about it, inferred from the runtime analysis of the exhibited (program) execution; this basic monitor form is occasionally termed (sequence) recognisers [28]. In other settings, monitors go further and either adapt aspects of the monitored program [7, 11, 22] or enforce predefined properties by modifying the observable behaviour [4, 15, 28]. Monitors are central to software engineering techniques such as monitor-oriented programming [16] and fail-fast design patterns [8] used in fault-tolerant systems [19, 34]; they are also used extensively in runtime verification [27], a lightweight verification technique that attempts to mitigate state explosion problems associated with full-blown verification methods such as model checking.
Example 1
There are various reasons why such preorders are useful. For a start, they act as notions of refinement: they allow us to formally specify properties that are expected of a monitor M by expressing them in terms of a monitor description, \(\textsf {Spec}_M\), and then requiring that \(\textsf {Spec}_M \sqsubseteq M\) holds. Moreover, our preorders provide a formal understanding for when it is valid to substitute one monitor implementation for another while preserving elected monitoring properties. We consider a general model that allows monitors to behave nondeterministically; this permits us to study the cases where nondeterminism is either tolerated or considered erroneous. Indeed, there are settings where determinism is unattainable (e.g., distributed monitoring [18, 33]). Occasionally, nondeterminism is also used to express underspecification in program refinement.
Although formal and intuitive, the preorders alluded to in (1) turn out to be hard to establish. One of the principal obstacles is the universal quantification over all possible processes for which the monitoring properties should hold. We therefore develop alternative characterisations for these preorders, \(M_1 \le M_2\), that do not rely on this universal quantification over process instrumentation. We show that such relations are sound wrt. the former monitor preorders, which serves as a semantic justification for the alternative monitor preorders. More importantly, however, it also allows us to use the more tractable alternative relations as a proof technique for establishing inequalities in the original preorders. We also show that these characterisations are complete, thereby obtaining full abstraction for these alternative preorders.
2 The Language
Figure 1 presents our process language, a standard version of the π-calculus. It has the usual constructs and assumes separate denumerable sets for channel names \(c,d,a,b \in \textsc {Chans} \), variables \(x, y, z\in \textsc {Vars} \) and process variables, \(X, Y\in \textsc {PVars} \), and lets identifiers \(u,\,v\) range over the sets, \(\textsc {Chans} \cup \textsc {Vars} \). The input construct, \(c \textsf {?} x . P\), the recursion construct, \(\textsf {rec}\,X.P\), and the scoping construct, \(\textsf {new}\,c. P\), are binders where the free occurrences of the variable x, the process variable \(X\), and the channel c, resp., are bound in the guarded body P. We write \(\mathbf{fv }(P), \mathbf{fV }(P),\mathbf{fn }(P), \mathbf{bV }(P),\mathbf{bv }(P)\) and \(\mathbf{bn }(P)\) for the resp. free/bound variables, process variables and names in P. We use standard syntactic conventions, e.g., we identify processes up to renaming of bound names and variables (alpha conversion). For arbitrary syntactic objects \(o, o'\), we write \(o \,\sharp \,o'\) when the free names in o and \(o'\) are disjoint, e.g., \(P \,\sharp \,Q\) means \(\mathbf{fn }(P) \cap \mathbf{fn }(Q) = \emptyset \).
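The free-name function and the freshness predicate above can be sketched concretely. The following is a minimal sketch over a toy tuple encoding of process terms; the constructor tags (`"nil"`, `"out"`, `"in"`, `"new"`, `"par"`) and the encoding itself are our own assumptions, not notation from the paper.

```python
def fn(proc):
    """Free channel names fn(P) of a toy process term."""
    tag = proc[0]
    if tag == "nil":                       # inactive process: no names
        return set()
    if tag == "out":                       # ("out", c, v, P) encodes c!v.P
        _, c, v, cont = proc
        return {c, v} | fn(cont)
    if tag == "in":                        # ("in", c, x, P) encodes c?x.P: x is bound
        _, c, x, cont = proc
        return {c} | (fn(cont) - {x})
    if tag == "new":                       # ("new", c, P) encodes new c.P: c is bound
        _, c, cont = proc
        return fn(cont) - {c}
    if tag == "par":                       # ("par", P, Q): parallel composition
        return fn(proc[1]) | fn(proc[2])
    raise ValueError(f"unknown constructor {tag!r}")

def fresh(p, q):
    """P # Q: the free names of P and Q are disjoint."""
    return fn(p).isdisjoint(fn(q))
```

For instance, `fn` of the encoding of \(c \textsf {?} x . x \textsf {!} a . \textsf {nil}\) is `{"c", "a"}`, since the occurrence of `x` in the continuation is bound by the input prefix.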
Definition 1
We lift the functions in Definition 1 to traces, e.g., \(\mathbf{aftr }(I,s)\), in the obvious way and denote successive transitions \(I \triangleright P \xrightarrow {\;\mu _1\;} P_1\) and \(\mathbf{aftr }(I,\mu _1) \triangleright P_1\xrightarrow {\;\mu _2\;} P_2\) as \(I \triangleright P \xrightarrow {\;\mu _1\;} \mathbf{aftr }(I,\mu _1) \triangleright P_1 \xrightarrow {\;\mu _2\;} P_2\). We write \(I \triangleright P\not \!\xrightarrow {\;\mu \;}\) to denote \(\not \!\exists P' \cdot I \triangleright P\xrightarrow {\;\mu \;} P'\) and \(I \triangleright P \xRightarrow {\;s\;} Q\) to denote \(I_0 \triangleright P_0 \xrightarrow {\;\mu _1\;} I_1 \triangleright P_1 \xrightarrow {\;\mu _2\;} I_2 \triangleright P_2 \ldots \xrightarrow {\;\mu _n\;} P_n\) where \(P_0 = P\), \(P_n=Q\), \(I_0=I\), \(I_i = \mathbf{aftr }(I_{i-1},\mu _i)\) for \(i\in 1..n\), and \(s\) is equal to \(\mu _1\ldots \mu _n\) after filtering \(\tau \) labels.
Example 2
3 Monitor Instrumentation
Monitors, \(M,N \in \textsc {Mon} \), are syntactically defined by the grammar of Fig. 2. They may reach either of two verdicts, namely detection, \(\checkmark \), or termination, \(\textsf {end}\), denoting an inconclusive verdict. Our setting is a mild generalisation of that in [17], since monitors need to reason about communicated names so as to adequately monitor π-calculus processes. They are thus equipped with a pattern-matching construct (used to observe external actions) and a name-comparison branching construct. The remaining constructs, i.e., external branching and recursion, are standard. Note that, whereas the syntax allows for monitors with free occurrences of monitor variables, monitors are always closed wrt. (value) variables, whereby the outermost occurrence of a variable acts as a binder. E.g., in the monitor (\(x ? c .x ! y .\textsf {if}\; y \!=\! d\,\textsf {then}\, \textsf {end}\;\textsf {else}\; \checkmark \)) pattern \(x ? c \) binds variable x in the continuation whereas pattern \(x ! y \) binds variable y.
The monitor semantics is defined in terms of an LTS (Fig. 2), modeling the analysis of the visible runtime execution of a process. Following [15, 17, 30], in rule mVer verdicts are able to analyse any external action but transition to the same verdict, i.e., verdicts are irrevocable. By contrast, pattern-guarded monitors only transition when the action matches the pattern, binding pattern variables to the resp. action names, \(\textsf {match}(p,\alpha ) =\sigma \), and substituting them in the continuation, \(M\sigma \); see rule mPat. The remaining transitions are unremarkable.
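The pattern-matching rule mPat and the verdict rule mVer can be sketched as follows, over a hypothetical tuple encoding of monitors and actions that is ours, not the paper's: actions are triples such as `("!", "c", "a")` for \(c ! a \), pattern variables are strings prefixed with `$`, and the constructors `"pat"`, `"choice"`, `"end"`, `"ok"` (for \(\checkmark \)) are assumed names.

```python
def match(pattern, action):
    """match(p, a) = sigma: bind pattern variables to action names, or None."""
    subst_map = {}
    for p, a in zip(pattern, action):
        if isinstance(p, str) and p.startswith("$"):
            subst_map[p] = a               # bind the pattern variable
        elif p != a:
            return None                    # concrete parts must agree
    return subst_map

def subst(mon, sigma):
    """Apply a substitution to a monitor term (M sigma)."""
    if isinstance(mon, tuple):
        return tuple(subst(m, sigma) for m in mon)
    return sigma.get(mon, mon)

def steps(mon, action):
    """One-step alpha-derivatives of a monitor (rules mVer, mPat, mChL/mChR)."""
    kind = mon[0]
    if kind in ("end", "ok"):              # mVer: verdicts are irrevocable
        return [mon]
    if kind == "pat":                      # ("pat", pattern, continuation)
        sigma = match(mon[1], action)
        return [subst(mon[2], sigma)] if sigma is not None else []
    if kind == "choice":                   # mChL / mChR: external branching
        return steps(mon[1], action) + steps(mon[2], action)
    return []                              # other constructs omitted in this sketch
```

For example, the pattern \(x ! y \) (encoded as `("!", "$x", "$y")`) matches the action \(c ! a \) with the substitution binding x to c and y to a, which is then applied to the continuation.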
A monitored system, \(P \triangleleft \, M\), consists of a process, P, instrumented with a monitor, M, analysing its (external) behaviour. Figure 2 defines the instrumentation semantics for configurations, \(I \triangleright P \triangleleft \, M\), i.e., systems augmented with an interface \(I\), where again we assume P is closed and \(\mathbf{fn }(P) \subseteq I\). The LTS semantics follows [7, 17] and relies on the resp. process and monitor semantics of Figs. 1 and 2. In rule iMon, if the process exhibits the external action \(\alpha \) wrt. \(I\), and the monitor can analyse this action, they transition in lockstep in the instrumented system while exhibiting the same action. If, however, a process exhibits an action that the monitor cannot analyse, the action is manifested at system level while the monitor is terminated; see rule iTer. Finally, iAsyP and iAsyM allow monitors and processes to transition independently wrt. internal moves, i.e., our instrumentation forces process-monitor synchronisation for external actions only, which constitute our monitorable actions. We note that, as is expected of recognisers, the process drives the behaviour of a monitored system: if the process cannot \(\alpha \)-transition, the monitored system cannot \(\alpha \)-transition either.
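The lockstep rule iMon and the termination rule iTer can be sketched as below. This is a simplified sketch under our own assumptions: it ignores the internal \(\tau \)-moves handled by iAsyP/iAsyM (which the full semantics prioritises over iTer), and the helper `pat_steps` is a hypothetical stand-in for the monitor LTS, handling only verdicts and a single concrete action prefix.

```python
def pat_steps(mon, action):
    """Hypothetical monitor alpha-moves: verdicts analyse anything (mVer);
    a prefix ("pat", a, cont) fires only on exactly that action."""
    if mon[0] in ("end", "ok"):            # "ok" stands for the verdict ✓
        return [mon]
    return [mon[2]] if mon[1] == action else []

def instrument(action, mon):
    """Rules iMon/iTer on the monitor side of a configuration I ▷ P ◁ M,
    given that the process exhibits `action`."""
    nexts = pat_steps(mon, action)
    if nexts:                              # iMon: process and monitor move in lockstep
        return [(action, n) for n in nexts]
    return [(action, ("end",))]            # iTer: action manifested, monitor terminated
```

So a monitor waiting for \(c ! a \) survives that action, but is silently terminated to \(\textsf {end}\) if the process emits \(c ! b \) instead, while the action itself is still observed at system level.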
Example 3
Theorem 1
(Definite Verdicts).
4 Monitor Preorders
We can use the formal setting presented in Sect. 3 to develop the monitor preorders discussed in the Introduction. We start by defining the monitoring predicates we expect to be preserved by the preorder; a number of these predicates rely on computations and detected computations, defined below.
Definition 2
One criterion for comparing monitors considers the verdicts reached after observing a specific execution trace produced by the process under scrutiny. The semantics of Sect. 3 assigns a passive role to monitors, prohibiting them from influencing the branching execution of the monitored process. Definition 2 thus differentiates between detected computations, identifying them by the visible trace that is dictated by the process (over which the monitor should not have any control).
Example 4
Example 4 suggests two types of computation detections that a monitor may exhibit.
Definition 3
(Potential and Deterministic Detection). M potentially detects for \(I \triangleright P\) along trace \(s\), denoted as \(\textsf {pd}(M,I,P,s)\), iff there exists a detecting \(s\)-computation from \(I \triangleright P \triangleleft \, M\). M deterministically detects for \(I \triangleright P\) along trace \(s\), denoted as \(\textsf {dd}(M,I,P,s)\), iff all \(s\)-computations from \(I \triangleright P \triangleleft \, M\) are detecting. \(\blacksquare \)
Remark 1
If a monitored process cannot produce trace \(s\), i.e., \(I \triangleright P \not \!\xRightarrow {\;s\;}\), then \(\textsf {pd}(M,I,P,s)\) is trivially false and \(\textsf {dd}(M,I,P,s)\) is trivially true for any M. \(\blacksquare \)
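Definition 3 and Remark 1 can be rendered very directly once the (finitely many) \(s\)-computations of a configuration are enumerated. In this sketch, which uses a representation that is ours rather than the paper's, each computation is a pair `(trace, detected)`:

```python
def pd(computations):
    """Potential detection: some s-computation is detecting."""
    return any(detected for _trace, detected in computations)

def dd(computations):
    """Deterministic detection: every s-computation is detecting."""
    return all(detected for _trace, detected in computations)
```

When the process cannot produce the trace at all the enumeration is empty, so `pd` is vacuously false and `dd` vacuously true, which is exactly the content of Remark 1.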
The detection predicates of Definition 3 induce the following monitor preorders (and equivalences), based on the resp. detection capabilities.
Definition 4
Example 5
As opposed to prior work on monitors [2, 10, 15], the detection predicates in Definition 3 consider monitor behaviour within an instrumented system. Apart from acting as a continuation of the study in [17], this setup also enables us to formally justify subtle monitor orderings (Example 6) and analyse peculiarities brought about by the instrumentation relation (Example 7).
Example 6
Example 7
In (12), monitor \(\varOmega +\textsf {end}\) does not deterministically detect any computation: when composed with an arbitrary \(I \triangleright P\), it clearly can never reach a detection, but neither can it prohibit the process from producing visible actions, as in the case of \(\varOmega \) (see rules mVer, mChR and iMon). Monitor \(\varOmega +\checkmark \) can either behave like \(\varOmega \) or transition to \(\checkmark \) after observing one external action; in both cases, it deterministically detects all \(s\)-computations where \(|s| >0\). The monitor \(\textsf {rec}\,X.(\tau .{X}+\checkmark )\) first silently transitions to \((\tau .{\textsf {rec}\,X.(\tau .{X}+\checkmark )})+\checkmark \) and then either transitions back to the original state or else transitions to \(\checkmark \) with an external action; in either case, when composed with any process \(I \triangleright P\), it also deterministically detects all \(s\)-computations for \(|s| >0\).
In (13), although monitor \(\varOmega +c ! a .{\checkmark }\) potentially detects fewer computations than \(\varOmega +\checkmark \) (e.g., for \(I=\left\{ c,a,b\right\} \), \(P=c \textsf {!} b . \textsf {nil} \) and \(s =c ! b .\epsilon \), the predicate \(\textsf {pd}(\varOmega +\checkmark , I, P, s)\) holds but \(\textsf {pd}(\varOmega +c ! a .{\checkmark }, I, P, s)\) does not), both deterministically detect the same computations, i.e., all \(s\)-computations where \(|s| >0\). Specifically, if a process being monitored, say \(I \triangleright P\), can produce an action other than \(c ! a \), the instrumentation with monitor \(\varOmega +c ! a .{\checkmark }\) restrains such an action, since the monitor cannot transition with that external action (it can only transition with \(c ! a \)) but, at the same time, it can \(\tau \)-transition (see rules iMon and iTer). \(\blacksquare \)
The preorders in Definition 4 are not as discriminating as one might expect.
Example 8
There are however settings where the equalities established in Example 8 are deemed too coarse. E.g., in (15), whereas monitor \(c ! a .{\textsf {end}}\) is innocuous when instrumented with a process, monitor \(c ! a .{\textsf {end}}+c ! a .{\varOmega }\) may potentially change the observed behaviour of the process under scrutiny after the action \(c ! a \) is emitted (by suppressing external actions, as explained in Example 7); a similar argument applies for the monitors in (14). We thus define a third monitor predicate called transparency [5, 15, 28], stating that whenever a monitored process cannot perform an external action, it must be because the (unmonitored) process is unable to perform that action (i.e., the monitoring does not prohibit that action).
Definition 5
Although the preorders in Definitions 4 and 5 are interesting in their own right, we define the following relation as the finest monitor preorder in this paper.
Definition 6
Example 9
We have \(M_\text {any}\not \sqsubseteq M_\text {any}+\varOmega \) because \(M_\text {any}\not \sqsubseteq _\textsf {tr} M_\text {any}+\varOmega \), since \(\lnot \textsf {tr}(M_\text {any}+\varOmega ,I,P,s) \) for \(I=\left\{ c,a\right\} \), \(P=c \textsf {!} a . c \textsf {!} a . \textsf {nil} \) and \(s =c ! a .\epsilon \). Similarly, we also have \(c ! a .{\textsf {end}} \not \sqsubseteq c ! a .{\textsf {end}}+c ! a .{\varOmega }\). \(\blacksquare \)
Inequalities from the preorders of Definitions 4 and 5 are relatively easy to repudiate. For instance, we can use \(P,I\) and \(t \) from Example 4 as counterexamples to show that \(\textsf {pd}(M_5,I,P,t)\) and \(\lnot \textsf {pd}(M_3,I,P,t)\), thus disproving \(M_5 \sqsubseteq _\textsf {pd} M_3\). However, it is much harder to show that an inequality from these preorders holds because we need to consider monitor behaviour wrt. all possible processes, interfaces and traces. As shown in Examples 6, 7 and 8, this often requires intricate reasoning in terms of the three LTSs defined in Figs. 1 and 2.
5 Characterisation
We define alternative monitor preorders for which positive statements about their inequalities are easier to establish. The new preorders are defined exclusively in terms of the monitor operational semantics of Fig. 2, as opposed to how they are affected by arbitrary processes as in Definition 6 (which considers also the process and instrumentation LTSs). We show that the new preorders coincide with those in Sect. 4. Apart from equipping us with an easier mechanism for determining the inequalities of Sect. 4, the correspondence results provide further insight into the properties satisfied by the preorders of Definitions 4 and 5.
We start with the potential-detection preorder. We first define a restricted monitor LTS that disallows idempotent transitions from verdicts, \(w \xrightarrow {\;\alpha \;} w \): these are redundant when considering the monitor operational semantics in isolation. Note, however, that we can still use rule mVer, e.g., to derive \(\checkmark +M \xrightarrow {\;\alpha \;}_r \checkmark \).
Definition 7
(Restricted Monitor Semantics). A derived monitor transition, \( M \xrightarrow {\;\mu \;}_r N \), is the least relation satisfying the conditions \(M \xrightarrow {\;\mu \;} N\) and \( M\ne w \). We write \(M \xRightarrow {\;s\;}_r N\) for a transition sequence in the restricted LTS. \(\blacksquare \)
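Definition 7 amounts to cutting the moves whose source is itself a verdict. A minimal sketch, assuming some one-step function `full_steps(M, a)` for the full monitor LTS of Fig. 2 and the same hypothetical tuple encoding as before (`"end"`/`"ok"` for verdicts):

```python
def restricted_steps(mon, action, full_steps):
    """Definition 7: M --a-->_r N iff M --a--> N and M is not a verdict."""
    if mon[0] in ("end", "ok"):        # M = w: no restricted transitions
        return []
    return full_steps(mon, action)     # all other moves kept, e.g. ✓+M --a-->r ✓
```

Note that rule mVer may still fire inside a derivation (as in \(\checkmark +M \xrightarrow {\;\alpha \;}_r \checkmark \)); only transitions originating at a verdict are dropped.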
We use the restricted LTS to limit the detecting transition sequences on the left of the implication of Definition 8. However, we permit these transitions to be matched by transition sequences in the original monitor LTS, so as to allow the monitor to the right of the inequality to match the sequence with a prefix of visible actions (which can then be padded by \(\checkmark \xrightarrow {\;\alpha \;} \checkmark \) transitions as required).
Definition 8
Theorem 2
(Potential-Detection Preorders). \( M \sqsubseteq _\textsf {pd} N \quad \text {iff}\quad M \preceq _\textsf {pd} N \)
Example 10
By virtue of Theorem 2, to show that \(\varOmega +c ! a .{\checkmark } \sqsubseteq _\textsf {pd} \varOmega +\checkmark \) from (13) of Example 7 holds, we only need to consider \(\varOmega +c ! a .{\checkmark } \xrightarrow {\;c ! a \;}_r \checkmark \), which can be matched by \(\varOmega +\checkmark \xrightarrow {\;c ! a \;} \checkmark \). Similarly, to show \((x ! a .{\textsf {if}\; x \!=\! c\,\textsf {then}\, \checkmark \;\textsf {else}\; \textsf {end} }) \sqsubseteq _\textsf {pd} \checkmark \), we only need to consider \((x ! a .{\textsf {if}\; x \!=\! c\,\textsf {then}\, \checkmark \;\textsf {else}\; \textsf {end} }) \xRightarrow {\;c ! a \;}_r \checkmark \), matched by \(\checkmark \xrightarrow {\;c ! a \;} \checkmark \).\(\blacksquare \)
For the remaining characterisations, we require two divergence judgements.
Definition 9

\( M \uparrow \) denotes that M diverges, meaning that it can produce an infinite transition sequence of \(\tau \)-actions \(M\xrightarrow {\;\tau \;} M' \xrightarrow {\;\tau \;} M'' \xrightarrow {\;\tau \;} \ldots \)

\( M \Uparrow \) denotes that M strongly diverges, meaning that it cannot produce a finite transition sequence of \(\tau \)-actions \(M\xrightarrow {\;\tau \;} M' \xrightarrow {\;\tau \;} \ldots M'' \not \!\xrightarrow {\;\tau \;}\). \(\blacksquare \)
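For finite-state monitors, both judgements of Definition 9 become decidable graph properties of the \(\tau \)-transition relation: \(M \uparrow \) holds iff the \(\tau \)-graph reachable from M contains a cycle, and \(M \Uparrow \) holds iff no reachable state is \(\tau \)-stuck, so every maximal \(\tau \)-sequence is infinite. A sketch under the assumption that the \(\tau \)-moves are given as an adjacency map (our own encoding):

```python
def tau_reachable(m, tau):
    """All states reachable from m via tau-transitions (including m)."""
    seen, todo = set(), [m]
    while todo:
        x = todo.pop()
        if x not in seen:
            seen.add(x)
            todo.extend(tau.get(x, []))
    return seen

def diverges(m, tau):
    """M↑: a cycle is reachable in the tau-graph (finite-state case)."""
    nodes = tau_reachable(m, tau)
    colour = {n: 0 for n in nodes}         # 0 white, 1 grey, 2 black
    def dfs(n):
        colour[n] = 1
        for s in tau.get(n, []):
            if colour[s] == 1 or (colour[s] == 0 and dfs(s)):
                return True                # grey successor: cycle found
        colour[n] = 2
        return False
    return any(dfs(n) for n in nodes if colour[n] == 0)

def strongly_diverges(m, tau):
    """M⇑: no reachable state is tau-stuck, so no finite maximal sequence."""
    return all(tau.get(n) for n in tau_reachable(m, tau))
```

E.g., encoding \(\varOmega \) as the single self-looping state `{"W": ["W"]}` gives both \(\varOmega \uparrow \) and \(\varOmega \Uparrow \), whereas a state that can also escape to a \(\tau \)-stuck state diverges but does not strongly diverge.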
Lemma 1
\( M \xrightarrow {\;\tau \;}\quad \text {implies}\quad M\ne w \)
The alternative preorder for deterministic detections, Definition 11 below, is based on three predicates describing the behaviour of a monitor M along a trace \(s\). The predicate \(\textsf {blk}(M,s)\) describes the potential for M to block before it can complete trace \(s\). Predicate \(\textsf {fl}(M,s)\) describes the potential for failing after monitoring trace \(s\), i.e., an \(s\)-derivative of M reaches a non-detecting state from which no further \(\tau \)-actions are possible, or it diverges (implicitly, by Lemma 1, this also implies that the monitors along the diverging sequences are never detecting). Finally, \(\textsf {nd}(M,s)\) states the existence of a non-detecting \(s\)-derivative of M.
Definition 10
Corollary 1
\( \textsf {blk}(M,s) \quad \text {implies}\quad \forall t \cdot \textsf {blk}(M,s t) \)
Definition 11
Theorem 3
(Deterministic-Detection Preorders). \( M \sqsubseteq _\textsf {dd} N \;\text {iff}\; M \preceq _\textsf {dd} N \)
Example 11
 1. We have \(\textsf {blk}(N,s)\) whenever \(s =\alpha s '\) and \(\alpha \ne c ? a \). We have two subcases:
    - If \(\textsf {match}(x ! b ,\alpha )\) is undefined, we show \(\textsf {blk}(M,\alpha s ')\) by first showing that \(\textsf {blk}(M,\alpha \epsilon )\) and then generalising the result for arbitrary \(s '\) using Corollary 1.
    - If \(\exists \sigma \cdot \textsf {match}(x ! b ,\alpha )=\sigma \), we show \(\textsf {fl}(M,\alpha s ')\) by first showing \(\textsf {fl}(M,\alpha \epsilon )\) and then generalising the result for arbitrary \(s '\) using Theorem 1.
 2. For any \(s\), we have \(\textsf {fl}(N,c ? a .s)\): we can show \(\textsf {fl}(M,c ? a .s)\), again using Theorem 1 to alleviate the proof burden.
 3. For any \(s\), we have \(\textsf {nd}(N,c ? a .s)\): the required proof is analogous to the previous case. \(\blacksquare \)
Example 12
Due to full abstraction (i.e., completeness), we can alternatively disprove \(\checkmark \sqsubseteq _\textsf {dd} \tau .{\checkmark }\) from Example 6 by showing that \(\checkmark \not \preceq _\textsf {dd} \tau .{\checkmark }\): we can readily argue that whereas \(\textsf {nd}(\tau .{\checkmark },\epsilon )\), we cannot show either \(\textsf {nd}(\checkmark ,\epsilon )\) or \(\textsf {blk}(\checkmark ,\epsilon )\). \(\blacksquare \)
Example 13
 1. For any \(s \ne \epsilon \) we have \(\lnot \textsf {blk}(\varOmega ,s)\), \(\lnot \textsf {blk}(\varOmega +\checkmark ,s)\) and \(\lnot \textsf {blk}(\textsf {rec}\,X.(\tau .{X}+\checkmark ),s)\).
 2. We only have \(\textsf {fl}(\varOmega ,\epsilon )\), \(\textsf {fl}(\varOmega +\checkmark ,\epsilon )\) and \(\textsf {fl}(\textsf {rec}\,X.(\tau .{X}+\checkmark ),\epsilon )\).
 3. Similarly, we only have \(\textsf {nd}(\varOmega ,\epsilon )\), \(\textsf {nd}(\varOmega +\checkmark ,\epsilon )\) and \(\textsf {nd}(\textsf {rec}\,X.(\tau .{X}+\checkmark ),\epsilon )\). \(\blacksquare \)
Remark 2
The alternative preorder in Definition 11 can be optimised further using refined versions of the predicates \(\textsf {fl}(M,s)\) and \(\textsf {nd}(M,s)\) that are defined in terms of the restricted monitor transitions of Definition 7, as in the case of Definition 3. \(\blacksquare \)
The alternative transparency preorder, Definition 13 below, is defined in terms of divergence refusals which, in turn, rely on strong divergences from Definition 9. Intuitively, divergence refusals are the set of actions that cannot be performed whenever a monitor reaches a strongly divergent state following the analysis of trace \(s\). These actions turn out to be those that are suppressed on a process when instrumented with the resp. monitor.
Definition 12
Definition 13
Theorem 4
(Transparency Preorders). \( M \sqsubseteq _\textsf {tr} N \quad \text {iff}\quad M \preceq _\textsf {tr} N \)
Example 14
Recall monitors \(c ! a .{\textsf {end}}\) and \(c ! a .{\textsf {end}}+c ! a .{\varOmega }\) from (15) of Example 8. The inequality \(c ! a .{\textsf {end}}+c ! a .{\varOmega } \sqsubseteq _\textsf {tr} c ! a .{\textsf {end}}\) follows trivially from Theorem 4, since \(\forall s \cdot \textsf {dref}(c ! a .{\textsf {end}},s)=\emptyset \). The symmetric case, \(c ! a .{\textsf {end}} \sqsubseteq _\textsf {tr} c ! a .{\textsf {end}}+c ! a .{\varOmega }\), can also be readily repudiated by applying Theorem 4. Since \(\textsf {dref}((c ! a .{\textsf {end}}+c ! a .{\varOmega }), c ! a .\epsilon )=\textsc {Act} \) (and \(\textsf {dref}(c ! a .{\textsf {end}},c ! a .\epsilon )=\emptyset \)) we trivially obtain a violation of the set inclusion requirements of Definition 13. \(\blacksquare \)
Example 15
Recall again \(\varOmega +\checkmark \) and \(\textsf {rec}\,X.(\tau .{X}+\checkmark )\) from (12). We can differentiate between these monitors from a transparency perspective, and Theorem 4 permits us to do this with relative ease. In fact, whereas \(\textsf {dref}((\varOmega +\checkmark ),\epsilon )=\textsc {Act} \) (since \(\varOmega +\checkmark \xrightarrow {\;\tau \;} \varOmega \) and \(\textsf {dref}(\varOmega ,\epsilon ) = \textsc {Act} \)) we have \(\textsf {dref}((\textsf {rec}\,X.(\tau .{X}+\checkmark )),\epsilon )=\emptyset \); for all other traces \(s\) with \(|s| \ge 1\) we obtain empty divergence refusal sets for both monitors. We can thus positively conclude that \(\varOmega +\checkmark \sqsubseteq _\textsf {tr} \textsf {rec}\,X.(\tau .{X}+\checkmark )\) while refuting \(\textsf {rec}\,X.(\tau .{X}+\checkmark ) \sqsubseteq _\textsf {tr} \varOmega +\checkmark \). \(\blacksquare \)
Definition 14
Theorem 5
(Full Abstraction). \( M \sqsubseteq N \quad \text {iff}\quad M \preceq N \)
6 Conclusion
In runtime verification, three-verdict monitors [2, 10, 15, 17] are often considered, where detections are partitioned into acceptances and rejections. The monitors studied here express generic detections only; they are nevertheless maximally expressive for branching-time properties [17]. They also facilitate comparisons with other linear-time preorders (see below). We also expect our theory to extend smoothly to settings with acceptances and rejections.
Our potential and deterministic detection preorders are reminiscent of the classical may and must preorders of [13, 23] and, more recently (for the deterministic detection preorder), of the subcontract relations in [3, 21]. However, these relations differ from ours in a number of respects. For starters, the monitor instrumentation relation of Fig. 2 assigns monitors a passive role whereas the parallel composition relation composing processes (servers in [3, 21]) with tests (clients in [3, 21]) invites tests to interact with the process being probed. Another important difference is that testing preorders typically relate processes, whereas our preorders are defined over the adjudicating entities, i.e., the monitors. The closest work in this regard is that of [3], where the authors develop a must theory for clients. Still, there are significant discrepancies between this must theory and our deterministic detection preorder (further to the differences between the detected (monitored) computations of Definition 2 and the successful computations under tests of [3, 13, 23] as outlined above — success in the compliance relation of [21] is even more disparate). Concretely, in our setting we have equalities such as \(c ! a .\checkmark \cong _\textsf {dd} c ! a .\checkmark +c ! b .\textsf {end}\) (see (10) of Example 6), which would not hold in the setting of [3] since their client preorder is sensitive to external choices (\(\sqsubseteq _\textsf {dd}\) is not because monitored executions are distinguished by their visible trace). The two relations are in fact incomparable, since divergent processes are bottom elements in the client must preorder of [3], but they are not in \(\sqsubseteq _\textsf {dd}\).
In fact, we have \(\varOmega \not \sqsubseteq _\textsf {dd} \varOmega +\textsf {end}\) in (12) of Example 7 or, more clearly, \(\varOmega \not \sqsubseteq _\textsf {dd} \varOmega +\alpha .\textsf {end}\); at an intuitive level, this is because the instrumentation relation of Fig. 2 prioritises silent actions over external actions that cannot be matched by the monitor.
Transparency is usually a concern for enforcement monitors whereby the visible behaviour of a monitored process should not be modified unless it violates some specified property [5, 15, 28]. We adapted this concept to recognisers, whereby the process behaviour should never be suppressed by the monitor.
To our knowledge, the only body of work that studies monitoring for the π-calculus is [5, 11, 22], which focusses on synthesising adaptation/enforcement monitors from session types. The closest to our work is [5]: their definitions of monitor correctness are however distinctly different (e.g., they are based on branching-time equivalences) and their decomposition methods for decoupling the monitor analysis from that of processes rely on static typechecking.
Acknowledgements
The paper benefited from discussions with Luca Aceto, Giovanni Bernardi, Matthew Hennessy and Anna Ingólfsdóttir.
References
 1.Barringer, H., Falcone, Y., Havelund, K., Reger, G., Rydeheard, D.E.: Quantified event automata: Towards expressive and efficient runtime monitors. In: Giannakopoulou, D., Méry, D. (eds.) FM 2012. LNCS, vol. 7436, pp. 68–84. Springer, Heidelberg (2012)CrossRefGoogle Scholar
 2.Bauer, A., Leucker, M., Schallhart, C.: Runtime verification for LTL and TLTL. TOSEM 20(4), 14 (2011)CrossRefGoogle Scholar
 3.Bernardi, G., Hennessy, M.: Mutually testing processes. LMCS 11(2:1), 1–23 (2015)MathSciNetzbMATHGoogle Scholar
 4.Bielova, N., Massacci, F.: Do you really mean what you actually enforced? Edited automata revisited. Int. J. Inf. Secur. 10(4), 239–254 (2011)CrossRefGoogle Scholar
 5.Bocchi, L., Chen, T.C., Demangeon, R., Honda, K., Yoshida, N.: Monitoring networks through multiparty session types. In: Beyer, D., Boreale, M. (eds.) FORTE 2013 and FMOODS 2013. LNCS, vol. 7892, pp. 50–65. Springer, Heidelberg (2013)CrossRefGoogle Scholar
 6.Cassar, I., Francalanza, A.: On synchronous and asynchronous monitor instrumentation for actor systems. FOCLASA 175, 54–68 (2014)Google Scholar
 7.Kane, A., Chowdhury, O., Datta, A., Koopman, P.: A case study on runtime monitoring of an autonomous research vehicle (ARV) system. In: Bartocci, E., et al. (eds.) RV 2015. LNCS, vol. 9333, pp. 102–117. Springer, Heidelberg (2015). doi: 10.1007/9783319238203_7 CrossRefGoogle Scholar
 8.Cesarini, F., Thompson, S.: Erlang Programming. O’Reilly, Sebastopol (2009)zbMATHGoogle Scholar
 9.Chen, F., Roşu, G.: MOP: An efficient and generic runtime verification framework. In: OOPSLA, pp. 569–588. ACM, (2007)Google Scholar
 10.Cini, C., Francalanza, A.: An LTL proof system for runtime verification. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 581–595. Springer, Heidelberg (2015)Google Scholar
 11.Coppo, M., DezaniCiancaglini, M., Venneri, B.: Selfadaptive monitors for multiparty sessions. In: PDP, pp. 688–696. IEEE Computer Society (2014)Google Scholar
 12.D’Angelo, B., Sankaranarayanan, S., Sánchez, C., Robinson, W., Finkbeiner, B., Sipma, H.B., Mehrotra, S., Manna, Z.: LOLA: Runtime monitoring of synchronous systems. In: TIME, IEEE (2005)Google Scholar
 13.De Nicola, R., Hennessy, M.C.B.: Testing equivalences for processes. TCS 34(1–2), 83–133 (1984)MathSciNetCrossRefzbMATHGoogle Scholar
 14.Decker, N., Leucker, M., Thoma, D.: jUnit\(^\text{ RV }\)–adding runtime verification to junit. In: Brat, G., Rungta, N., Venet, A. (eds.) NFM 2013. LNCS, vol. 7871, pp. 459–464. Springer, Heidelberg (2013)CrossRefGoogle Scholar
 15.Falcone, Y., Fernandez, J.C., Mounier, L.: What can you verify and enforce at runtime? STTT 14(3), 349–382 (2012)CrossRefGoogle Scholar
 16.Formal Systems Laboratory. Monitor Oriented Programming. University of Illinois at Urbana Champaign. http://fsl.cs.illinois.edu/index.php/MonitoringOriented_Programming
 17.Kane, A., Chowdhury, O., Datta, A., Koopman, P.: A case study on runtime monitoring of an autonomous research vehicle (ARV) system. In: Bartocci, E., et al. (eds.) RV 2015. LNCS, vol. 9333, pp. 102–117. Springer, Heidelberg (2015). doi: 10.1007/9783319238203_7 CrossRefGoogle Scholar
 18.Francalanza, A., Gauci, A., Pace, G.J.: Distributed system contract monitoring. JLAP 82(5–7), 186–215 (2013)MathSciNetzbMATHGoogle Scholar
 19.Francalanza, A., Hennessy, M.: A theory for observational fault tolerance. JLAP 73(1–2), 22–50 (2007)MathSciNetzbMATHGoogle Scholar
 20.Francalanza, A., Seychell, A.: Synthesising correct concurrent runtime monitors. FMSD 46(3), 226–261 (2015)zbMATHGoogle Scholar
 21.Castagna, G., Gesbert, N., Padovani, L.: A theory of contracts for web services. ACM Trans. Program. Lang. Syst. 31(5), 1–61 (2009)CrossRefzbMATHGoogle Scholar
 22.Giusto, C.D., Perez, J.A.: Disciplined structured communications with disciplined runtime adaptation. Sci. Comput. Program. 97(2), 235–265 (2015)CrossRefGoogle Scholar
 23.Hennessy, M.: Algebraic Theory of Processes. MIT Press, Cambridge (1988)zbMATHGoogle Scholar
 24.Hennessy, M.: A Distributed PiCalculus. Cambridge University Press, Cambridge (2007)CrossRefzbMATHGoogle Scholar
 25.Kim, M., Viswanathan, M., Kannan, S., Lee, I., Sokolsky, O.: JavaMaC: A runtime assurance approach for Java programs. FMSD 24(2), 129–155 (2004)zbMATHGoogle Scholar
 26.Kane, A., Chowdhury, O., Datta, A., Koopman, P.: A case study on runtime monitoring of an Autonomous Research Vehicle (ARV) system. In: Bartocci, E., et al. (eds.) RV 2015. LNCS, vol. 9333, pp. 102–117. Springer, Heidelberg (2015). doi: 10.1007/9783319238203_7 CrossRefGoogle Scholar
 27.Leucker, M., Schallhart, C.: A brief account of runtime verification. JLAP 78(5), 293–303 (2009)zbMATHGoogle Scholar
 28.Ligatti, J., Bauer, L., Walker, D.: Edit automata: enforcement mechanisms for runtime security policies. Int. J. Inf. Secur. 4(1–2), 2–16 (2005)CrossRefGoogle Scholar
 29.Milner, R.: Communication and Concurrency. PrenticeHall Inc, Upper Saddle River (1989)zbMATHGoogle Scholar
 30.Roşu, G., Havelund, K.: Rewritingbased techniques for runtime verification. Autom. Softw. Engg. 12(2), 151–197 (2005)CrossRefGoogle Scholar
 31.Sangiorgi, D., Walker, D.: PICalculus: A Theory of Mobile Processes. Cambridge University Press, Cambridge (2001)zbMATHGoogle Scholar
 32.Schneider, F.B.: Enforceable security policies. ACM Trans. Inf. Syst. Secur. 3(1), 30–50 (2000)CrossRefGoogle Scholar
 33.Sen, K., Vardhan, A., Agha, G., Rosu, G.: Efficient decentralized monitoring of safety in distributed systems. In: ICSE, pp. 418–427. IEEE (2004)Google Scholar
 34.Verissimo, P., Rodrigues, L.: Distributed Systems for System Architects. Kluwer Academic Publishers, Norwell (2001)CrossRefzbMATHGoogle Scholar