Abstract
Verification of component-based systems still suffers from limitations such as state-space explosion, since a large number of different components may interact in a heterogeneous environment. These limitations entail the need for complementary verification methods such as runtime verification. Runtime verification is a dynamic analysis technique that does not suffer from these scalability limitations. In this paper, we integrate runtime verification into the BIP (Behavior, Interaction and Priority) framework. BIP is a powerful and expressive component-based framework for the formal construction of heterogeneous systems. Our method augments BIP systems with monitors that check specifications at runtime. This method has been implemented in RV-BIP, a prototype tool that we used to validate the whole approach on a robotic application.
Notes
 1.
Consequently, it does not forbid having several assignments to the same variable in such sequences. In such a case, the last assignment to this variable determines its final value.
 2.
The BIP engine implementing this semantics chooses one interaction at random, when faced with several enabled interactions.
 3.
Otherwise, some simplification of the specification shall be performed beforehand. For instance, such simplification should rule out events of the form \(a \vee \lnot a\) where \(a\in \text{ Atom }\).
 4.
This is a reasonable and usual hypothesis in runtime verification since one expects to characterize the behavior of an implementation in a deterministic way. Moreover, these two constraints are easily and naturally ensured by a monitor-generation tool that takes as input a specification written in a higher-level formalism. Finally, note that readiness corresponds to the standard concept of completeness in automata theory.
 5.
This event is unique because of determinism (see Definition 13).
 6.
There are some approaches proposing a formal semantics of aspect-oriented programming, but these approaches work mainly on abstract models of the underlying programming language. Moreover, to the best of our knowledge, no RV framework has proposed a formalization of its instrumentation process.
 7.
Because we use as input a monitor specified as a finite-state machine.
 8.
Otherwise the lemma holds vacuously.
References
 1.
Bliudze, S., Sifakis, J.: A notion of glue expressiveness for component-based systems. In: van Breugel, F., Chechik, M. (eds.) Proceedings of the 19th International Conference on Concurrency Theory, CONCUR: Volume 5201 of Lecture Notes in Computer Science, pp. 508–522. Springer, New York (2008)
 2.
Runtime Verification. http://www.runtimeverification.org (2001–2012)
 3.
Bauer, A., Leucker, M., Schallhart, C.: Comparing LTL semantics for runtime verification. J. Logic Comput. 20, 651–674 (2010)
 4.
Falcone, Y., Fernandez, J.C., Mounier, L.: Runtime verification of safety-progress properties. In: Bensalem, S., Peled, D. (eds.) Proceedings of the 9th International Workshop on Runtime Verification, RV: Selected Papers. Volume 5779 of LNCS, pp. 40–59. Springer, Berlin (2009)
 5.
Falcone, Y., Jaber, M., Nguyen, T.H., Bozga, M., Bensalem, S.: Runtime verification of component-based systems. In: Barthe, G., Pardo, A., Schneider, G. (eds.) Proceedings of the 9th International Conference on Software Engineering and Formal Methods, SEFM: Volume 7041 of LNCS, pp. 204–220. Springer, Berlin (2011)
 6.
Francalanza, A., Gauci, A., Pace, G.J.: Distributed system contract monitoring. In: Pimentel, E., Valero, V. (eds.) Proceedings of the Fifth Workshop on Formal Languages and Analysis of Contract-Oriented Software (FLACOS 2011). Volume 68 of EPTCS, pp. 23–37 (2011)
 7.
Bauer, A.K., Falcone, Y.: Decentralised LTL monitoring. In: Giannakopoulou, D., Méry, D. (eds.) Proceedings of the 18th International Symposium on Formal Methods, FM: Volume 7436 of LNCS, pp. 85–100. Springer, Berlin (2012)
 8.
Bonakdarpour, B., Bozga, M., Jaber, M., Quilbeuf, J., Sifakis, J.: From high-level component-based models to distributed implementations. In: Carloni, L.P., Tripakis, S. (eds.) Proceedings of the 10th International Conference on Embedded Software (EMSOFT 2010), pp. 209–218. ACM (2010)
 9.
Bozga, M., Jaber, M., Sifakis, J.: Source-to-source architecture transformation for performance optimization in BIP. In: Carloni, L., Thiele, L. (eds.) Proceedings of the IEEE 4th International Symposium on Industrial Embedded Systems (SIES 2009), pp. 152–160. IEEE (2009)
 10.
Basu, A., Bozga, M., Sifakis, J.: Modeling heterogeneous real-time components in BIP. In: Pandya, P., Hung, D.V. (eds.) Proceedings of the 4th IEEE International Conference on Software Engineering and Formal Methods (SEFM 2006), pp. 3–12. IEEE Computer Society (2006)
 11.
Bliudze, S., Sifakis, J.: The algebra of connectors—structuring interaction in BIP. IEEE Trans. Comput. 57, 1315–1330 (2008)
 12.
d’Amorim, M., Roşu, G.: Efficient monitoring of \(\omega \)-languages. In: Etessami, K., Rajamani, S.K. (eds.) Proceedings of 17th International Conference on Computer-Aided Verification (CAV’05). Volume 3576 of LNCS, pp. 364–378. Springer, Berlin (2005)
 13.
Stolz, V.: Temporal assertions with parametrised propositions. In: Sokolsky, O., Tasiran, S. (eds.) 7th International Workshop on Runtime Verification, RV: Revised Selected Papers. Volume 4839 of LNCS, pp. 176–187. Springer, Berlin (2007)
 14.
Barringer, H., Rydeheard, D., Havelund, K.: Rule systems for runtime monitoring: from EAGLE to RuleR. J. Logic Comput. 20, 675–706 (2010)
 15.
Meredith, P., Jin, D., Griffith, D., Chen, F., Roşu, G.: An overview of the MOP runtime verification framework. Int. J. Softw. Tools Technol. Transf. (STTT) (2011), 1–41. doi:10.1007/s10009-011-0198-6
 16.
Pnueli, A., Zaks, A.: PSL model checking and runtime verification via testers. In: Misra, J., Nipkow, T., Sekerinski, E. (eds.) Proceedings of the 14th International Symposium on Formal Methods, FM: Volume 4085 of LNCS, pp. 573–586. Springer, Berlin (2006)
 17.
Falcone, Y., Fernandez, J.C., Mounier, L.: What can you verify and enforce at runtime? Softw. Tools Technol. Transf. 14, 349–382 (2012)
 18.
Bauer, A., Leucker, M., Schallhart, C.: Runtime verification for LTL and TLTL. ACM Trans. Softw. Eng. Methodol. 20, 14 (2011)
 19.
Havelund, K.: Runtime verification of C programs. In: Suzuki, K., Higashino, T., Ulrich, A., Hasegawa, T. (eds.) Proceedings of the 20th IFIP TC 6/WG 6.1 International Conference on Testing of Software and Communicating Systems, TestCom, and the 8th International Workshop on Formal Approaches to Testing of Software (TestCom/FATES 2008). Volume 5047 of LNCS, pp. 7–22. Springer, Berlin (2008)
 20.
Fleury, S., Herrb, M., Chatila, R.: GenoM: A tool for the specification and the implementation of operating modules in a distributed robot architecture. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 97), pp. 842–848. IEEE (1997)
 21.
Bensalem, S., Gallien, M., Ingrand, F., Kahloul, I., Nguyen, T.H.: Toward a more dependable software architecture for autonomous robots. IEEE Robot. Autom. Mag. Spec. Issue Soft. Eng. Robot. 16, 67–77 (2008)
 22.
Umrigar, Z.D., Pitchumani, V.: Formal verification of a real-time hardware design. In: Radke, C.E. (ed.) Proceedings of the 20th Design Automation Conference (DAC ’83), pp. 221–227. IEEE Press, Piscataway (1983)
 23.
Queille, J.P., Sifakis, J.: Specification and verification of concurrent systems in CESAR. In: Dezani-Ciancaglini, M., Montanari, U. (eds.) Proceedings of the 5th International Symposium on Programming. Volume 137 of LNCS, pp. 337–351 (1982)
 24.
Clarke, E.M., Emerson, E.A.: Synthesis of synchronisation skeletons for branching time temporal logic. In: Kozen, D. (ed.) Logic of Programs: Workshop. Volume 131 of LNCS (1981)
 25.
Clarke, E.M., Long, D.E., McMillan, K.L.: Compositional model checking. In: Parikh, R. (ed.) Proceedings of the Fourth Annual Symposium on Logic in Computer Science, pp. 353–362. IEEE Computer Society Press (1989)
 26.
Chang, E., Manna, Z., Pnueli, A.: Compositional verification of realtime systems. In: Abramsky, S., (ed.) Symposium on Logic in Computer Science, IEEE (1994)
 27.
Long, D.E.: Model Checking, Abstraction, and Compositional Reasoning. Ph.D. thesis, Carnegie Mellon (1993)
 28.
Bensalem, S., Bozga, M., Nguyen, T.H., Sifakis, J.: Compositional verification for component-based systems and application. Softw. J. Spec. Issue Autom. Compos. Verif. 4, 181–193 (2010)
 29.
Bensalem, S., Bozga, M., Legay, A., Nguyen, T.H., Sifakis, J., Yan, R.: Incremental componentbased construction and verification using invariants. In: Bloem, R., Sharygina, N. (eds.) Proceedings of 10th International Conference on Formal Methods in ComputerAided Design (FMCAD 2010), pp. 257–256. IEEE (2010)
 30.
Meyer, B.: Applying “design by contract”. Computer 25, 40–51 (1992)
 31.
Abadi, M., Lamport, L.: Composing specifications. ACM Trans. Program. Lang. Syst. 15, 73–132 (1993)
 32.
Hafaiedh, I.B., Graf, S., Quinton, S.: Reasoning about safety and progress using contracts. In: Dong, J.S., Zhu, H. (eds.) Proceedings of the 12th International Conference on Formal Engineering Methods, ICFEM: Volume 6447 of LNCS, pp. 436–451. Springer, Berlin (2010)
 33.
Barringer, H., Goldberg, A., Havelund, K., Sen, K.: Rule-based runtime verification. In: Steffen, B., Levi, G. (eds.) Proceedings of the 5th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI: Volume 2937 of LNCS, pp. 44–57. Springer, Berlin (2004)
 34.
Barringer, H., Groce, A., Havelund, K., Smith, M.: Formal analysis of log files. J. Aerosp. Comput. Inf. Commun. (2010)
 35.
Barringer, H., Havelund, K.: TraceContract: A Scala DSL for trace analysis. In: Butler, M., Schulte, W. (eds.) Proceedings of the 17th International Symposium on Formal Methods, FM: Volume 6664 of LNCS, pp. 57–72. Springer, Berlin (2011)
 36.
Bacchus, F., Kabanza, F.: Planning for temporally extended goals. In: Clancey, W.J., Weld, D.S. (eds.) AAAI/IAAI, vol. 2, AAAI Press/The MIT Press, pp. 1215–1222 (1996)
 37.
Allan, C., Avgustinov, P., Christensen, A.S., Hendren, L., Kuzins, S., Lhoták, O., de Moor, O., Sereni, D., Sittampalam, G., Tibble, J.: Adding trace matching with free variables to AspectJ. SIGPLAN Not. 40, 345–364 (2005)
 38.
Stolz, V., Bodden, E.: Temporal assertions using AspectJ. In: Havelund, K., Núñez, M., Rosu, G., Wolff, B. (eds.) Proceedings of the First Combined International Workshops on Formal Approaches to Software Testing and Runtime Verification (FATES/RV 06). Volume 4262 of LNCS, pp. 109–124. Springer, Berlin (2006)
 39.
Colombo, C., Pace, G.J., Schneider, G.: LARVA – safer monitoring of real-time Java programs (tool paper). In: Hung, D.V., Krishnan, P. (eds.) Proceedings of the 7th IEEE International Conference on Software Engineering and Formal Methods (SEFM 2009), pp. 33–37. IEEE Computer Society (2009)
 40.
Colombo, C., Gauci, A., Pace, G.J.: LarvaStat: Monitoring of statistical properties. In: Barringer, H., Falcone, Y., Finkbeiner, B., Havelund, K., Lee, I., Pace, G.J., Rosu, G., Sokolsky, O., Tillmann, N. (eds.) Proceedings of the 1st International Conference on Runtime Verification (RV 10). Volume 6418 of LNCS, pp. 480–484. Springer, Berlin (2010)
 41.
Rosu, G., Chen, F.: Semantics and algorithms for parametric monitoring. Log. Methods Comput. Sci. 8 (2012)
 42.
Kähkönen, K., Lampinen, J., Heljanko, K., Niemelä, I.: The LIME interface specification language and runtime monitoring tool. In: Bensalem, S., Peled, D. (eds.) Proceedings of the 9th International Workshop on Runtime Verification, RV: Selected Papers. Volume 5779 of LNCS, pp. 93–100. Springer, Berlin (2009)
 43.
Dormoy, J., Kouchnarenko, O., Lanoix, A.: Using temporal logic for dynamic reconfigurations of components. In: Barbosa, L.S., Lumpe, M. (eds.) Proceedings of the 7th International Workshop on Formal Aspects of Component Software, FACS: Volume 6921 of LNCS, pp. 200–217. Springer, Berlin (2010)
 44.
Bonakdarpour, B., Bozga, M., Jaber, M., Quilbeuf, J., Sifakis, J.: Automated conflict-free distributed implementation of component-based models. In: Fummi, F., Hsieh, H. (eds.) Proceedings of the IEEE 5th International Symposium on Industrial Embedded Systems (SIES 2010), pp. 108–117. IEEE (2010)
 45.
Bodden, E., Lam, P., Hendren, L.J.: Clara: A framework for partially evaluating finitestate runtime monitors ahead of time. In: Barringer, H., Falcone, Y., Finkbeiner, B., Havelund, K., Lee, I., Pace, G.J., Rosu, G., Sokolsky, O., Tillmann, N. (eds.) Proceedings of the 1st International Conference on Runtime Verification (RV 10). Volume 6418 of LNCS, pp. 183–197. Springer, Berlin (2010)
 46.
Bozga, M., Jaber, M., Maris, N., Sifakis, J.: Modeling dynamic architectures using DyBIP. In: Gschwind, T., Paoli, F.D., Gruhn, V., Book, M. (eds.) Proceedings of the 11th International Conference on Software Composition, SC: Volume 7306 of LNCS, pp. 1–16. Springer, Berlin (2012)
 47.
Falcone, Y.: You should better enforce than verify. In: Barringer, H., Falcone, Y., Finkbeiner, B., Havelund, K., Lee, I., Pace, G.J., Rosu, G., Sokolsky, O., Tillmann, N. (eds.) Proceedings of the 1st International Conference on Runtime Verification (RV 10). Volume 6418 of LNCS, pp. 89–105. Springer, Berlin (2010)
 48.
Milner, R.: Communication and concurrency. Prentice Hall International (UK) Ltd., Hertfordshire (1995)
Acknowledgments
The authors would like to warmly thank the anonymous reviewers for their insightful remarks.
Additional information
Communicated by Dr. Gerardo Schneider, Gilles Barthe, and Alberto Pardo.
Appendix A: A proof of correctness of the proposed approach
In order to prove the correctness of our approach, we proceed according to the following stages:

1.
Introducing a suitable abstraction of the system. In this abstraction, some data is discarded to focus only on the behavior of the system (Sect. A.1).

2.
Introducing some intermediate definitions and lemmas (Sect. A.2).

3.
Proving that the initial system and the instrumented system are observationally equivalent by showing a weak bisimulation between them. This is the cornerstone of the correctness of our approach in the sense that it demonstrates that our transformation preserves the initial behavior of the system up to some actions of the monitor. This result is proved in Sect. A.3.

4.
Proving that our transformation correctly transforms the initial system (Sect. A.4), using some intermediate lemmas from previous stages.
In the following proofs, we will consider several mathematical objects in order to prove the correctness of our framework:

an abstract monitor \(\mathcal{A }\!=\!(\Theta ^\mathcal{A },{\theta _{{\scriptscriptstyle \mathrm {init}}}^\mathcal{A }},\varSigma ,\stackrel{}{\longrightarrow }_\mathcal{A }, {\mathbb{B }_{4}},{ver}^\mathcal{A })\);

a BIP monitor \(M^\mathcal{A }=(P,L,T,X,\{g_\tau \}_{\tau \in T},\{f_{ \tau }\}_{\tau \in T})\) generated from \(\mathcal{A }\), i.e., \(M^{\mathcal{A }}={\textit{BuildMon}}(\mathcal{A })\);

a composite component \(B=\pi (\varGamma (\{B_i\}_{i\in [1,n]}))\) along with its behavior \(C=(Q,A,\stackrel{}{\longrightarrow })\);

the instrumented composite component \(B^m=\pi ^m(\varGamma ^m(\{B_i^m\}_{i\in [1,n]}\cup \{M^{A}\}))\) along with its behavior \(C^m=(Q^m,A^m,\stackrel{}{\longrightarrow }_m)\). \(B^m\) is obtained from \(B\) by following the procedure described in Sect. 5.
A.1 Abstracting data
With the objective of simplifying the following proofs, we introduce an abstraction that consists in analyzing the behavior of the involved components without considering some of the data. This abstraction is possible because our transformations modify the values of some newly introduced variables but preserve the values of the variables that were present in the initial system.
Recall that a state of an atomic component is defined as a triple \(q=(l,v,p)\), where \(l \in L\) is the control state, \(v \in [X \rightarrow \mathrm{Data}]\) is a valuation of the variables \(X\) of the atomic component, and \(p \in P\) is the port labelling the last executed transition. To simplify the proofs, we introduce an abstraction that consists in omitting the variables defined in the original atomic components. This abstraction is obtained by discarding some functions and guards defined in the connectors and transitions. Moreover, a state \(q=(l,v,p)\) of an atomic component, for some \(l\in L\), \(v\in [X\rightarrow \mathrm{Data}]\), and \(p \in P\), reduces to the control state \(l\) in the abstracted semantics. Consequently, a (global) state of \(B\) is a tuple consisting of the local states of its constituent atomic components. That is, the behavior \(C\) of the composite component \(B=\varGamma (\{B_1,\ldots ,B_n\})\) is a transition system \((Q,\upgamma , \stackrel{}{\longrightarrow })\), where \(Q=Q_1\times \cdots \times Q_n\) (with \(\forall i\in [1,n]: Q_i = B_i.L\)) and \(\stackrel{}{\longrightarrow }\) is the least set of transitions satisfying the rule:
Note that since data is abstracted, an interaction \(\upgamma \) now consists of a set of ports \(\mathcal{P }_\upgamma \) and the function \(t\) specifying the types of ports. The notion of execution (run) of composite components, in this abstracted semantics, transposes easily from Definition 14 to abstract behaviors. Moreover, in the following, to lighten notation, given a state \(q\in Q\) we do not make the distinction between \({[\![q]\!]}\) and \(q\).
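The abstracted composition rule can be sketched operationally. The following Python fragment is our own illustrative encoding, not part of BIP or its engine (the names compose, components, and interactions are ours): a component is a map from control states to enabled ports, an interaction is a set of (component, port) pairs, and an interaction fires from a global state exactly when every involved component enables its port, the other components keeping their local state.

```python
from itertools import product

def compose(components, interactions):
    """Data-free composition of components.

    components:   list of dicts  state -> {port: next_state}
    interactions: iterable of frozensets of (component_index, port) pairs
    Returns the least transition relation as a set of triples
    (global_state, interaction, next_global_state).
    """
    transitions = set()
    for q in product(*[list(c) for c in components]):
        for a in interactions:
            ports = dict(a)  # component index -> its port in the interaction
            # the interaction is enabled iff every involved component
            # enables its port from its current local state
            if all(p in components[i].get(q[i], {}) for i, p in ports.items()):
                qp = tuple(components[i][q[i]][ports[i]] if i in ports else q[i]
                           for i in range(len(components)))
                transitions.add((q, a, qp))
    return transitions
```

The sketch assumes that each component contributes at most one port to an interaction, which suffices for this illustration.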
A.2 Preliminary definitions and lemmas
We recall and introduce some definitions and intermediate results on our transformations that will be used when proving our central result in Sect. A.3.
Observational equivalence and bisimulation. Let us recall the notion of observational equivalence of two transition systems. It is based on the usual definition of weak bisimilarity [48], where \(\beta \)-transitions are considered unobservable.
Definition 21
(Weak simulation) Given two transition systems \(S_1 = (Q_1,P_1 \cup {\{\beta \}},\stackrel{}{\longrightarrow }_1)\) and \(S_2 = (Q_2 ,P_2 \cup {\{\beta \}},\stackrel{}{\longrightarrow }_2)\), the system \(S_1\) weakly simulates the system \(S_2\), if there is a relation \(R \subseteq Q_{1} \times Q_{2}\) such that the two following conditions hold:

1.
\(\forall (q,r) \in R, \forall a \in P_1:\ q \stackrel{a}{\longrightarrow }_1 q^{\prime } \implies \exists r^{\prime }:\ (q^{\prime },r^{\prime }) \in R \wedge r \stackrel{\beta ^*\cdot a\cdot \beta ^*}{\longrightarrow }_2 r^{\prime }\), and

2.
\(\forall (q,r) \in R:\ q \stackrel{\beta }{\longrightarrow }_1 q^{\prime } \implies \exists r^{\prime }:\ (q^{\prime },r^{\prime }) \in R \wedge r \stackrel{\beta ^*}{\longrightarrow }_2 r^{\prime }\)
Condition 1 says that if a state \(q\) simulates a state \(r\) and it is possible to perform \(a\) from \(q\) to end in a state \(q^{\prime }\), then there exists a state \(r^{\prime }\) simulated by \(q^{\prime }\) such that it is possible to go from \(r\) to \(r^{\prime }\) by performing some unobservable actions, the action \(a\), and then some unobservable actions. Condition 2 says that if a state \(q\) simulates a state \(r\) and it is possible to perform an unobservable action from \(q\) to reach a state \(q^{\prime }\), then from \(r\) it is possible to reach, by a sequence of unobservable actions, a state \(r^{\prime }\) simulated by \(q^{\prime }\).
In that case, we say that the relation \(R\) is a weak simulation over \(S_1\) and \(S_2\), or equivalently that the states of \(S_1\) are similar to the states of \(S_2\). Similarly, a weak bisimulation over \(S_1\) and \(S_2\) is a relation \(R\) such that \(R\) and \(R^{-1}{\stackrel{\mathrm{def}}{=}}\{(q_2,q_1)\mid (q_1,q_2)\in R\}\) are both weak simulations. In this latter case, we say that \(S_1\) and \(S_2\) are observationally equivalent and we write \(S_1 \sim S_2\).
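On finite transition systems, Definition 21 can be turned into an executable check by computing the greatest relation satisfying its two conditions. The sketch below is our own illustration, not from the paper (the dict encoding of an LTS and the names weak_simulation, beta_closure, and weak_moves are assumptions): it starts from the full product relation and iteratively removes pairs that violate either condition.

```python
from itertools import product

def weak_simulation(lts1, lts2, beta="beta"):
    """Greatest relation R over states of lts1 x states of lts2 such that
    every move of q (in lts1) is weakly matched by r (in lts2), following
    conditions 1 and 2 of Definition 21.  An LTS is a dict mapping every
    state to a list of (label, successor) pairs (possibly empty)."""

    def beta_closure(lts, state):
        # states reachable from `state` by beta* steps
        seen, stack = {state}, [state]
        while stack:
            s = stack.pop()
            for label, t in lts.get(s, []):
                if label == beta and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    def weak_moves(lts, state, a):
        # states reachable by beta* . a . beta*
        out = set()
        for s in beta_closure(lts, state):
            for label, t in lts.get(s, []):
                if label == a:
                    out |= beta_closure(lts, t)
        return out

    R = set(product(lts1, lts2))
    changed = True
    while changed:
        changed = False
        for q, r in list(R):
            for label, qp in lts1.get(q, []):
                # condition 2 for beta moves, condition 1 otherwise
                targets = (beta_closure(lts2, r) if label == beta
                           else weak_moves(lts2, r, label))
                if not any((qp, rp) in R for rp in targets):
                    R.discard((q, r))
                    changed = True
                    break
    return R
```

Running the check in both directions, with the monitoring interactions relabeled as beta, corresponds to the weak bisimilarity used later in Proposition 1.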
System stability. We now define a notion of system stability. Intuitively, the system is unstable when it has sent an event to the monitor and the monitor is still processing this event. Below, we exhibit some properties of the transformed system related to stability.
Following Definition 20, the set \(A^m\) of interactions of \(B^m\) can be partitioned into (1) the set \(A\) of initial interactions (present in the initial composite component), (2) the set \(A^1= \mathcal{I }(B^m.\upgamma _1)\) of interactions used by the monitor to observe the behavior of the system, and (3) the set \(A^2= \mathcal{I }(B^m.\upgamma _2)\) of internal interactions of the monitor used to move to its next state. We have \(A^m = A \cup A^{1} \cup A^{2}\), and \(A\), \(A^1\), and \(A^2\) are pairwise disjoint. Observational equivalence considers all interactions in \(A^1 \cup A^2\) to be labeled by unobservable events, denoted by \(\beta \).
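Concretely, comparing \(B\) and \(B^m\) requires hiding the monitoring interactions first. A small helper can relabel every interaction in \(A^1 \cup A^2\) as \(\beta \) while leaving the initial interactions of \(A\) untouched; the function name and encoding below are our own sketch, not from the paper.

```python
def hide_monitor_actions(transitions, A1, A2, beta="beta"):
    """Relabel interactions of A^1 (monitor observation) and A^2 (internal
    monitor moves) as the unobservable action beta; interactions of the
    initial set A keep their labels.  `transitions` is a set of triples
    (state, interaction, next_state)."""
    hidden = set(A1) | set(A2)
    return {(q, beta if a in hidden else a, qp) for (q, a, qp) in transitions}
```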
Definition 22
(Stable) Given a state \(q^m=(q_1^m, \ldots , q_n^m, {q_{{ mon}}}) \in Q^m\), the predicate \({ is\_stable}\in [Q^m\rightarrow \{\mathtt{true },\mathtt{false }\}]\) is defined as follows: \({ is\_stable}(q^m) \,{\stackrel{\mathrm{def}}{=}}\, \forall i \in [1,n]: q_i^m \in B_i.L\).
A state of a composite component, consisting of an \(n\)-tuple of local states of its constituent atomic components, is stable if each of these \(n\) local states belongs to the uninstrumented system, that is, if none of them was introduced by the transformation proposed in Definition 18.
We now introduce the notion of state stabilization. Stabilizing a state consists in either doing nothing if this state is already stable or returning the next stable state reached by the system.
Definition 23
(State stabilization) Let \(q^m=(q_1^m,\ldots ,q_n^m, {q_{{ mon}}}) \in Q^m\) be a state. The function \({ stable}: Q^m \rightarrow Q\) is defined by \({ stable}(q^m) = ({ stable}_1(q_1^m), \ldots , { stable}_n(q_n^m))\), where the intermediate functions \({ stable}_i\in [B_i^m.L \rightarrow B_i.L]\), for \(i\in [1,n]\), are defined as follows: \({ stable}_i(l) = l\) if \(l \in B_i.L\), and otherwise \({ stable}_i(l)\) is the unique state \(l^{\prime } \in B_i.L\) reached from \(l\) through the transition labeled by \(p^m\) in \(B_i^m\).
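The two notions admit a direct encoding; the sketch below uses our own hypothetical names (original_locations[i] stands for \(B_i.L\), and pm_successor[i] for the map from an intermediate state of \(B_i^m\) to the original location reached through its \(p^m\) transition):

```python
def is_stable(qm, original_locations):
    """Definition 22: a global state (monitor state omitted) is stable iff
    every local control state belongs to the original locations B_i.L."""
    return all(q in original_locations[i] for i, q in enumerate(qm))

def stabilize(qm, original_locations, pm_successor):
    """Definition 23: identity on stable local states; an unstable local
    state is mapped to the original location reached through its p^m
    transition, as encoded by pm_successor[i] for component i."""
    return tuple(q if q in original_locations[i] else pm_successor[i][q]
                 for i, q in enumerate(qm))
```

Stabilizing a stable state is then the identity, matching Lemma 1 below.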
Intermediate lemmas. We now propose some intermediate results characterizing the status of the global system w.r.t. the notion of stable states and stabilization. The first lemma is a direct consequence of the definition of the predicate is_stable and the notion of stabilization.
Lemma 1
For a given state \(q^m = (q_1^m,\ldots ,q_n^m,{q_{{ mon}}})\), we have \({ is\_stable}(q^m) \Leftrightarrow { stable}(q^m) = (q_1^m,\ldots ,q_n^m)\).
The following lemma states that when the system is in an unstable state, i.e., some constituent atomic components have performed an instrumented transition, the resulting state is such that the monitor can perform a transition labeled by \(p^m\) (and thus receive an event from the components).
Lemma 2
(When the system is not stable the monitor waits for the system) For every state \(q^m=(q_1^m,\ldots ,q_n^m, {q_{{ mon}}}) \in Q^m\), the following property holds: \(\lnot { is\_stable}(q^m) \implies {q_{{ mon}}}\stackrel{p^m}{\longrightarrow }_{M^\mathcal{A }}\),
where \(\stackrel{}{\longrightarrow }_{M^\mathcal{A }}\) is the transition relation of the monitor and \(p^m\) is the port used by components to communicate with the monitor (see Definition 18).
Proof
We distinguish two cases according to whether \(q^m\) is the initial state of the system or not. First, if \(q^m\) is the initial state of the system, then from Definition 19 we have \({q_{{ mon}}}\stackrel{p^m}{\longrightarrow }_{M^\mathcal{A }}\). Second, if \(q^m\) is not the initial state of the system, let \(q^{\prime m}=(q_1^{\prime m},\ldots ,q_n^{\prime m}, {q_{{ mon}}}^{\prime })\) be its predecessor and \(a\) be the interaction leading to \(q^m\), that is, \(q^{\prime m} \stackrel{a}{\longrightarrow }_m q^m\). The interaction \(a\) belongs either to \(A\), \(A^1\), or \(A^2\) (where \(\{A, A^1, A^2\}\) is the partition of the interactions of the instrumented component as defined in the paragraph on system stability):

If \(a \in A\), then the state of the monitor at \(q^{\prime m}\) is equal to the state of the monitor at \(q^m\). Indeed, the interactions in \(\mathcal{I }(B^m.\upgamma )\) consist only of ports of the atomic components \(\{B_i \mid i\in [1,n]\}\). Since the interaction defined by \(\mathcal{I }(B^m.\upgamma _2)\) has higher priority than the interactions in \(\mathcal{I }(B^m.\upgamma )\), necessarily, in the current local state \({q_{{ mon}}}\) of the monitor, it is not possible to fire a transition labeled by \({p_{{ intern}}}\) (i.e., \({q_{{ mon}}}\stackrel{M^\mathcal{A }.{p_{{ intern}}}}{\not \longrightarrow }_{M^\mathcal{A }}\)). Otherwise, the interaction \(\{{p_{{ intern}}}\}\) would be executed: the interaction defined by \(\mathcal{I }(B^m.\upgamma _2)\) consists only of the port \(M^\mathcal{A }.{p_{{ intern}}}\), it has higher priority than any other interaction in the system, and it would be enabled because of readiness.

If \(a \in A^1\), then \(a \subseteq \bigcup _{i=1}^n\{B_i^m.p^m\}\). Definition 18 together with maximal progress (Definition 9) ensures that, from the local states \(q_i^m\), the port \(p^m\) is not enabled for any \(i \in [1,n]\). Hence, we have \(\forall i \in [1,n]: q^m_i \in B_i.L\), that is, \({ is\_stable}(q^m)\).

If \(a \in A^2\), then \({q_{{ mon}}}^{\prime }\stackrel{{p_{{ intern}}}}{\longrightarrow }_{M^\mathcal{A }}\). Thus, the fact that \({q_{{ mon}}}\stackrel{p^m}{\longrightarrow }_{M^\mathcal{A }}\) follows directly from Definition 19.
\(\square \)
Lemma 3
(After an unstable state the system stabilizes) Given a run \(q^0\cdot q^1 \cdots q^s\) of \(B^m\) such that \(q^i \stackrel{a_i}{\longrightarrow }_{m} q^{i+1}\) holds for all \(i\in [0,s-1]\), we have: \(\forall i\in [0,s-1]: \lnot { is\_stable}(q^i) \implies { is\_stable}(q^{i+1})\).
Proof
Let \(q^i = (q^i_1,\ldots ,q^i_n,{q_{{ mon}}})\) be an unstable state (i.e., \(\lnot { is\_stable}(q^i)\)) of the run with \(i\in [0,s-1]\) (hence \(q^i\) is not the last state; see Note 8). Let \(q^{i+1}= (q^{i+1}_1,\ldots ,q^{i+1}_n,q^{\prime }_{mon})\) be the successor state of \(q^i\) in the run. Lemma 2 guarantees that the monitor is able to perform a transition labeled by \(p^m\) in \(q^i\), that is, \({q_{{ mon}}}\stackrel{p^m}{\longrightarrow }_{M^\mathcal{A }}\). Let \(Q_u = \{q^i_j \mid q^i_j \notin B_j.L\}\) be the set of locally unstable states. As \(q^i\) is not stable, \(Q_u\) is not empty. The set of possible interactions is the set of subsets of \(\{B^m_j.p^m \mid q^i_j \in Q_u\} \cup \{M^\mathcal{A }.p^m\}\). Indeed, observe first that these interactions have higher priority than the interactions in \(\mathcal{I }(B^m.\upgamma )\), and second that the monitor is ready (\({q_{{ mon}}}\stackrel{p^m}{\longrightarrow }_{M^\mathcal{A }}\)); in particular, it is not possible to execute any interaction in \(\mathcal{I }(B^m.\upgamma _2)\). Moreover, maximal progress (Definition 9) guarantees that the executed interaction is \(\{B^m_j.p^m \mid q^m_j \in Q_u\} \cup \{M^\mathcal{A }.p^m\}\). In turn, Definition 18 ensures that from all local states \(q_j^{i+1}, j \in [1,n]\), the port \(p^m\) is not enabled. Thus, we have \({ is\_stable}(q^{i+1})\).\(\square \)
A.3 Observational equivalence between the original and transformed BIP models
We are now ready to state and prove our central result.
Proposition 1
The non-instrumented system is bisimilar to the instrumented system, where interactions with the monitor and internal interactions of the monitor are considered to be unobservable actions; that is, \(B^m \sim B\).
Proof
Following Sect. A.2, we need to exhibit a relation \(R\) between the set of states \(Q^m\) of \(B^m\) and the set of states \(Q\) of \(B\). We define \(R {\stackrel{\mathrm{def}}{=}}\{(q^m,q) \mid q^m\in Q^m\wedge { stable}(q^m) = q \}\). We shall prove the following three assertions to establish that \(R\) is a weak bisimulation:

(i)
\(\forall (q^m,q) \in R: q^m \stackrel{\beta }{\longrightarrow }_m r^m\Longrightarrow (r^m,q) \in R\).

(ii)
\(\forall (q^m,q) \in R: q^m \stackrel{a}{\longrightarrow }_m r^m\Longrightarrow \exists r \in Q: q \stackrel{a}{\longrightarrow } r\wedge (r^m,r) \in R\).

(iii)
\(\forall (q^m,q) \in R: q \stackrel{a}{\longrightarrow } r\Longrightarrow \exists r^m \in Q^m: q^m \stackrel{\beta ^*a}{\longrightarrow }_m r^m \wedge (r^m,r) \in R\).
Proof of (i)
Suppose that \(q^m \stackrel{\beta }{\longrightarrow }_m r^m\); we distinguish two cases according to the partition of interactions proposed in Sect. A.2:

Case \(\beta \in A^1\). Then \(\beta \subseteq \bigcup _{i=1}^n\{B_i^m.p^m\}\). Let \(q^m=(q_1^m,\ldots ,q_n^m, {q_{{ mon}}})\) and \(r^m=(r_1^m,\ldots ,r_n^m, r_{mon})\). Because \((q^m,q) \in R\), we have \(q = { stable}(q^m) = ({ stable}_1(q_1^m), \ldots , { stable}_n(q_n^m))\). We distinguish two subcases according to whether \(q_i^m\) is stable or not.

Let us suppose that \(q_i^m\) is a stable state, then we have \({ stable}_i(q_i^m) = q_i^m\). From the local state \(q_i^m\) of the atomic component \(B_i\), port \(B_i^m.p^m\) is not enabled, hence after executing an interaction consisting only of ports \(p^m\) the local state \(q_i^m\) does not change, that is, \(q_i^m = r_i^m\) and \({ stable}_i(r_i^m) = { stable}_i(q_i^m)\).

Let us suppose \(q_i^m\) is not a stable state, then \(\exists q^{\prime }\in Q^m_i: { stable}_i(q_i^m)=q^{\prime } \ne q_i^m\). From the local state \(q_i^m\), the port \(B_i^m.p^m\) is enabled. Moreover, after executing the interaction \(\beta \), the local state \(q_i^m\) becomes \(r_i^m\), where \(r_i^m = { stable}_i(q_i^m) = q^{\prime }\) (because of maximal progress, see Definition 9), and \(r_i^m \in B_i.L\) (see Definition 18), that is, \({ stable}_i(r_i^m) = r_i^m = { stable}_i(q_i^m)\). Therefore, \({ stable}(r^m) = ({ stable}_1(q_1^m), \ldots , { stable}_n(q_n^m)) = { stable}(q^m) = q\), thus \((r^m,q) \in R\).


Case \(\beta \in A^2\), that is, \(\beta = \{M^A.{p_{{ intern}}}\}\). Hence, after executing \(\beta \) none of the local states \(q_i^m\) for \(i \in [1,n]\) change (that is, \(r_i^m = q_i^m\) for \(i \in [1,n]\)). Therefore, \({ stable}(r^m) = ({ stable}_1(q_1^m), \ldots , { stable}_n(q_n^m)) = { stable}(q^m) = q\), thus \((r^m,q) \in R\).
Proof of (ii)
Suppose that \(q^m \stackrel{a}{\longrightarrow }_m r^m\). Then \({ stable}(q^m) = q^m\), that is, \({ is\_stable}(q^m)\). Let \(q^m=(q_1^m,\ldots ,q_n^m, {q_{{ mon}}})\) and \(q =(q_1^m,\ldots ,q_n^m)\); from state \(q\), interaction \(a\) is possible. Let \(r\) be the next state after executing \(a\), that is, \(q \stackrel{a}{\longrightarrow } r\). We distinguish two cases according to whether \(r^m\) is stable or not:

If \({ is\_stable}(r^m)\), then \(r = (r_1^m,\ldots ,r_n^m)\) where \(r^m=(r_1^m,\ldots ,r_n^m, r_{mon})\) (Definition 18). Hence, \({ stable}(r^m) = ({ stable}_1(r_1^m), \ldots , { stable}_n(r_n^m)) = (r_1^m,\ldots ,r_n^m) = r\), that is, \((r^m,r) \in R\).

If \(\lnot { is\_stable}(r^m)\), let \(s^m\) be the next state in the run after \(r^m\), that is, \(r^m \stackrel{\beta }{\longrightarrow }_m s^m\). Lemma 3 ensures that \(s^m\) is stable (\({ is\_stable}(s^m)\)), hence the interaction \(\beta \) is such that \(\beta \subseteq \cup _{i=1}^n\{B_i^m.p^m\}\). As \(s^m\) is stable, then \({ stable}(s^m) = (s_1^m, \ldots , s_n^m)\) (Lemma 1), where \(s^m = (s_1^m,\ldots ,s_n^m, s_{mon})\). Moreover, since \(\beta \subseteq \cup _{i=1}^n\{B_i^m.p^m\}\), then \({ stable}(r^m) = (s_1^m,\ldots , s_n^m)\). Definition 18 ensures that \(r = (s_1^m,\ldots , s_n^m)\). That is, \({ stable}(r^m) = r\), thus \((r^m,r) \in R\).
Proof of (iii)
Suppose that \(q \stackrel{a}{\longrightarrow } r\). Let \(q^m = (q_1^m,\ldots ,q_n^m,{q_{{ mon}}})\), where \({ stable}(q^m)=(q_1,\ldots ,q_n)\). We have two cases:

If \({ is\_stable}(q^m)\), then \(q^m \stackrel{a}{\longrightarrow }_m r^m\) and \((r^m,r) \in R\). In this case, we can apply the same reasoning as in case (ii), considering two cases for \(r^m\).

If \(\lnot { is\_stable}(q^m)\), let \(q^{\prime m}\) be the next state after \(q^m\) (\(q^m \stackrel{\beta }{\longrightarrow }_m q^{\prime m}\)). Lemma 3 ensures that \(q^{\prime m}\) is stable (\({ is\_stable}(q^{\prime m})\)). Hence, \(q^{\prime m}=(q_1,\ldots ,q_n,q^{\prime }_{mon})\), that is, \(q^{\prime m} \stackrel{a}{\longrightarrow }_m r^m\) and \((r^m,r) \in R\). In this case, we can apply the same reasoning as in case (ii), considering two cases for \(r^m\).
A.4 Correctness of our approach
The correctness of our approach is supported by two arguments.
First, the instrumented system is observationally equivalent to the non-instrumented system where the actions used to monitor the system are considered unobservable (Proposition 1). It is standard in runtime verification frameworks for monolithic programs to assume that the instrumentation code does not take part in the semantics of the monitored program. Thus, the relevant behavior of a monitored monolithic program is obtained by considering the original actions (present before instrumentation) as observable, and the behavior generated by the instrumentation code and the code of the monitor as unobservable. Our instrumentation thus ensures that if the initial system produces an execution, then the same execution will be produced by the instrumented system, up to the actions needed to monitor the system.
The second argument is the correctness of the verdicts produced by the monitor. This is ensured by the freshness of the data received by the monitor and by the fact that the monitor always receives the necessary information. Indeed, if the state of the system is modified in such a way that it influences the truth-value of the monitored property, then at least one atomic proposition of one event in the specification has possibly changed. In that case, according to the definition of the function \(c\_v\), the new values of the involved elements in the specification are transmitted to the monitor. Lemma 2 and the priorities given to the interactions of the monitor ensure that the system cannot move before the monitor has finished processing the new state and has produced a verdict.
Falcone, Y., Jaber, M., Nguyen, T. et al. Runtime verification of component-based systems in the BIP framework with formally-proved sound and complete instrumentation. Softw. Syst. Model. 14, 173–199 (2015). https://doi.org/10.1007/s10270-013-0323-y
Keywords
 Runtime verification
 Component-based systems
 Instrumentation
 Formal methods