Decentralised LTL monitoring

Published in Formal Methods in System Design

Abstract

Users wanting to monitor distributed or component-based systems often perceive them as monolithic systems which, seen from the outside, exhibit a uniform behaviour as opposed to many components displaying many local behaviours that together constitute the system’s global behaviour. This level of abstraction is often reasonable, hiding implementation details from users who may want to specify the system’s global behaviour in terms of a linear-time temporal logic (LTL) formula. However, the problem that arises then is how such a specification can actually be monitored in a distributed system that has no central data collection point, where all the components’ local behaviours are observable. In this case, the LTL specification needs to be decomposed into sub-formulae which, in turn, need to be distributed amongst the components’ locally attached monitors, each of which sees only a distinct part of the global behaviour. The main contribution of this paper is an algorithm for distributing and monitoring LTL formulae, such that satisfaction or violation of specifications can be detected by local monitors alone. We present an implementation and show that our algorithm introduces only a negligible delay in detecting satisfaction/violation of a specification. Moreover, our practical results show that the communication overhead introduced by the local monitors is generally lower than the number of messages that would need to be sent to a central data collection point. Furthermore, our experiments strengthen the argument that the algorithm performs well in a wide range of different application contexts, given by different system/communication topologies and/or system event distributions over time.

Notes

  1. The assumption of atomic proposition locality is not a restriction of our approach; it merely simplifies the presentation.

  2. Note that \(\mathcal {L}(\varphi )\), being a liveness language, does not have any bad prefixes.

  3. Compared to RuleR [14], the state-of-the-art rule-based runtime verification tool, our simplification function produced better results for LTL specifications (see [26]).

  4. Our experiments show that this way of measuring the size of a formula is more representative of how difficult it is to progress it in a decentralised manner. Formulae of size above 6 are not realistic in practice.

  5. We conducted benchmarks (not reported here) with traces generated with Bernoulli probability distributions with parameters 0.9 and 0.99; the results followed the same trends.

  6. Without loss of generality, we assume that the priority of a monitor is given by its index. If this hypothesis does not hold initially, the indices of the components can be rearranged prior to monitoring so that it does.

References

  1. Pnueli A (1977) The temporal logic of programs. In: Foundations of Computer Science (FOCS), IEEE, pp 46–57

  2. Seyster J, Dixit K, Huang X, Grosu R, Havelund K, Smolka SA, Stoller SD, Zadok E (2010) Aspect-oriented instrumentation with GCC. In: Barringer H, Falcone Y, Finkbeiner B, Havelund K, Lee I, Pace GJ, Rosu G, Sokolsky O, Tillmann N (eds.) Proceedings of International Conference on Runtime Verification (RV), volume 6418 of LNCS. Springer, Berlin, pp 405–420

  3. Meredith P, Rosu G (2010) Runtime verification with the RV System. In: Barringer H, Falcone Y, Finkbeiner B, Havelund K, Lee I, Pace GJ, Rosu G, Sokolsky O, Tillmann N (eds.) Proceedings of International Conference on Runtime Verification (RV), volume 6418 of LNCS. Springer, Berlin, pp 136–152

  4. Hallé S, Villemaire R (2010) Runtime verification for the web: a tutorial introduction to interface contracts in web applications. In: Barringer H, Falcone Y, Finkbeiner B, Havelund K, Lee I, Pace GJ, Rosu G, Sokolsky O, Tillmann N (eds.) Proceedings of International Conference on Runtime Verification (RV), volume 6418 of LNCS. Springer, Berlin, pp 106–121

  5. Gunzert M, Nägele A (1999) Component-based development and verification of safety critical software for a brake-by-wire system with synchronous software components. In: International Symposium on SE for Parallel and Distributed Systems (PDSE), IEEE, p 134

  6. Lukasiewycz M, Glaß M, Teich J, Milbredt P (2009) FlexRay schedule optimization of the static segment. In: 7th IEEE/ACM International Conference on Hardware/software codesign and system synthesis (CODES+ISSS), ACM, pp 363–372

  7. Pop T, Pop P, Eles P, Peng Z, Andrei A (2008) Timing analysis of the FlexRay communication protocol. Real-Time Syst 39:205–235

  8. Miller SP, Whalen MW, Cofer DD (2010) Software model checking takes off. Commun ACM 53:58–64

  9. Pigan R, Metter M (2008) Automating with PROFINET: industrial communication based on industrial ethernet. Wiley, New York

  10. Felser M (2005) Real-time ethernet-industry prospective. Proc IEEE 93(6):1118–1129

  11. Serna Oliver R, Craciunas SS, Stoger G (2014) Analysis of deterministic ethernet scheduling for the industrial internet of things. In: 2014 IEEE 19th international workshop on computer aided modeling and design of communication links and networks (CAMAD). IEEE, pp 320–324

  12. Havelund K, Rosu G (2001) Monitoring programs using rewriting. In: 16th IEEE International conference on automated software engineering (ASE 2001), pp 135–143

  13. Roşu G, Havelund K (2005) Rewriting-based techniques for runtime verification. Autom Softw Eng 12(2):151–197

  14. Barringer H, Rydeheard DE, Havelund K (2010) Rule systems for run-time monitoring: from Eagle to RuleR. J Log Comput 20(3):675–706

  15. Bauer A, Falcone Y (2012) Decentralised LTL monitoring. In: Giannakopoulou D, Mery D (eds.) Proceedings of the 18th international symposium on formal methods (FM), volume 7436 of Lecture Notes in Computer Science. Springer, Berlin, pp 85–100

  16. Jantsch A (2003) Modeling Embedded systems and SoC’s: concurrency and time in models of computation. Morgan Kaufmann, San Francisco

  17. Pnueli A (1977) The temporal logic of programs. In: SFCS’77: Proceedings of the 18th annual symposium on foundations of computer science, Washington, DC, USA. IEEE Computer Society, pp 46–57

  18. Bauer A, Leucker M, Schallhart C (2010) Comparing LTL semantics for runtime verification. J Log Comput 20(3):651–674

  19. Bauer A, Leucker M, Schallhart C (2011) Runtime verification for LTL and TLTL. ACM Trans Softw Eng Methodol (TOSEM) 20(4):14

  20. Bacchus F, Kabanza F (1998) Planning for temporally extended goals. Ann Math Artif Intell 22:5–27

  21. Kupferman O, Vardi MY (2001) Model checking of safety properties. Formal Methods Syst Design 19(3):291–314

  22. Sen K, Roşu G, Agha G (2003) Generating optimal linear temporal logic monitors by coinduction. In: Advances in computing science - ASIAN 2003: programming languages and distributed computation. Springer, Berlin, pp 260–275

  23. Sistla AP, Clarke EM (1985) The complexity of propositional linear temporal logics. J ACM 32(3):733–749

  24. Lichtenstein O, Pnueli A, Zuck LD (1985) The glory of the past. In: Conference on logic of programs. Springer, pp 196–218

  25. Markey N (2003) Temporal logic with past is exponentially more succinct (concurrency column). Bull EATCS 79:122–128

  26. DecentMon Website. http://decentmonitor.forge.imag.fr

  27. Dwyer MB, Avrunin GS, Corbett JC (1999) Patterns in property specifications for finite-state verification. In: International Conference on Software Engineering (ICSE). ACM, pp 411–420

  28. Specification Patterns Website. http://patterns.projects.cis.ksu.edu/

  29. Wang Y, Yoo T-S, Lafortune S (2004) New results on decentralized diagnosis of discrete event systems. In: Proceedings of 42nd Annual Allerton Conference on Communication, Control, and Computing, October 2004

  30. Wang Y, Yoo T-S, Lafortune S (2007) Diagnosis of discrete event systems using decentralized architectures. Discrete Event Dyn Syst 17:233–263

  31. Cassez F (2010) The complexity of codiagnosability for discrete event and timed systems. In: Bouajjani A, Chin W-N (eds.) ATVA, volume 6252 of Lecture Notes in Computer Science. Springer, pp 82–96

  32. Tripakis S (2005) Decentralized observation problems. In: 44th IEEE Conference on decision and control (CDC-ECC), IEEE, pp 6–11

  33. Sen K, Vardhan A, Agha G, Rosu G (2006) Decentralized runtime analysis of multithreaded applications. In: 20th parallel and distributed processing symposium (IPDPS). IEEE

  34. Genon A, Massart T, Meuter C (2006) Monitoring distributed controllers. In: Formal methods (FM), volume 4085 of LNCS. Springer, pp 557–572

  35. Falcone Y, Cornebize T, Fernandez J-C (2014) Efficient and generalized decentralized monitoring of regular languages. In: Ábrahám E, Palamidessi C (eds.) FORTE, volume 8461 of Lecture Notes in Computer Science. Springer, pp 66–83

  36. Barringer H, Falcone Y, Finkbeiner B, Havelund K, Lee I, Pace GJ, Rosu G, Sokolsky O, Tillmann N (eds.) (2010) Proceedings of International Conference on Runtime Verification (RV), volume 6418 of LNCS. Springer

Author information

Correspondence to Yliès Falcone.

Appendix 1: Proofs

1.1 (A) Proofs for Sect. 3

1.1.1 Proof of Lemma 1 (p. 6)

The following inductive proof follows the argument conveyed by Proposition 3 of [20]. For the sake of completeness, we give the complete, formal, and detailed proof here.

The lemma is a direct consequence of the semantics of \(\mathrm {LTL}\) and the definition of progression (Definition 2). Recall that this lemma states that the progression function “mimics” \(\mathrm {LTL}\) semantics on some event \(\sigma \).
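
For readability, the case analysis of the progression function \(P\) used throughout this proof can be summarised operationally. The following sketch is ours and only illustrative (a hypothetical tuple encoding of formulae in Python); it is not the implementation accompanying the paper.

```python
# Minimal sketch of one-step LTL progression (our illustration).
# Formulas are nested tuples: ('top',), ('bot',), ('ap', 'a'), ('not', f),
# ('or', f, g), ('and', f, g), ('X', f), ('G', f), ('F', f), ('U', f, g).
# An event sigma is the set of atomic propositions observed at that instant.

def progress(phi, sigma):
    """One-step progression P(phi, sigma) over a single event sigma."""
    op = phi[0]
    if op in ('top', 'bot'):
        return phi
    if op == 'ap':                          # P(p, sigma) = top iff p in sigma
        return ('top',) if phi[1] in sigma else ('bot',)
    if op == 'not':                         # P(~f, sigma) = ~P(f, sigma)
        return ('not', progress(phi[1], sigma))
    if op in ('or', 'and'):                 # P distributes over the Boolean connectives
        return (op, progress(phi[1], sigma), progress(phi[2], sigma))
    if op == 'X':                           # P(X f, sigma) = f
        return phi[1]
    if op == 'G':                           # P(G f, sigma) = P(f, sigma) /\ G f
        return ('and', progress(phi[1], sigma), phi)
    if op == 'F':                           # P(F f, sigma) = P(f, sigma) \/ F f
        return ('or', progress(phi[1], sigma), phi)
    if op == 'U':                           # P(f U g, sigma) = P(g, sigma) \/ (P(f, sigma) /\ f U g)
        return ('or', progress(phi[2], sigma),
                      ('and', progress(phi[1], sigma), phi))
    raise ValueError(f'unknown operator: {op}')
```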

Proof

We shall prove the following statement:

$$\begin{aligned} \forall \sigma \in {\varSigma }. \forall w\in {\varSigma }^\omega . \forall \varphi \in \mathrm {LTL}\,.\ \sigma \cdot w\models \varphi \iff w\models P(\varphi ,\sigma ). \end{aligned}$$

Let us consider an event \(\sigma \in {\varSigma }\) and an infinite trace \(w\in {\varSigma }^\omega \). The proof is done by a structural induction on \(\varphi \in \mathrm {LTL}\,\).

Base case: \(\varphi \in \{\top ,\bot ,p\in {{{AP}_{}}}\}\).

  • Case \(\varphi =\top \). This case is trivial since, according to the definition of the progression function, \(\forall \sigma \in {\varSigma }.\ P(\top ,\sigma )=\top \). Moreover, according to the \(\mathrm {LTL}\) semantics of \(\top \), \(\forall w\in {\varSigma }^\omega .\ w\models \top \).

  • Case \(\varphi =\bot \). This case is symmetrical to the previous one.

  • Case \(\varphi = p\in {{{AP}_{}}}\). Recall that, according to the progression function for atomic propositions, we have \(P(p,\sigma )=\top \) if \(p\in \sigma \) and \(\bot \) otherwise.

    • Let us suppose that \(\sigma \cdot w\models p\). According to the \(\mathrm {LTL}\) semantics of atomic propositions, it means that \(p\in \sigma \), and thus \(P(p,\sigma )=\top \). And, due to the \(\mathrm {LTL}\) semantics of \(\top \), we have \(\forall w\in {\varSigma }^\omega .\ w\models \top \).

    • Let us suppose that \(w\models P(p,\sigma )\). Since \(P(p,\sigma )\in \{\top ,\bot \}\), we have necessarily \(P(p,\sigma )=\top \). According to the progression function, \(P(p,\sigma )=\top \) amounts to \(p\in \sigma \). Using the \(\mathrm {LTL}\) semantics of atomic propositions, we deduce that \(\sigma \cdot w\models p\).

Induction case: \(\varphi \in \{\lnot \varphi ',\varphi _1\vee \varphi _2,\varphi _1\wedge \varphi _2,\mathbf {G}\varphi ',\mathbf {F}\varphi ', \mathbf {X}\varphi ',\varphi _1\mathbf {U}\varphi _2\}\). Our induction hypothesis states that the lemma holds for some formulae \(\varphi ',\varphi _1,\varphi _2\in \mathrm {LTL}\,\).

  • Case \(\varphi =\lnot \varphi '\). On the one hand, using the progression function for \(\lnot \), we have \(P(\lnot \varphi ',\sigma )=\lnot P(\varphi ',\sigma )\). On the other hand, using the \(\mathrm {LTL}\) semantics of operator \(\lnot \), we have \(w\models \varphi \iff w\not \models \lnot \varphi \). Thus, we have \(\sigma \cdot w\models \lnot \varphi '\) iff \(\sigma \cdot w\not \models \varphi '\) iff (induction hypothesis on \(\varphi '\)) \(w\not \models P(\varphi ',\sigma )\) iff \(w\models \lnot P(\varphi ',\sigma )\) iff \(w\models P(\lnot \varphi ',\sigma )\).

  • Case \(\varphi =\varphi _1\vee \varphi _2\). Recall that, according to the progression function for operator \(\vee \), we have \(P(\varphi _1\vee \varphi _2,\sigma )= P(\varphi _1,\sigma )\vee P(\varphi _2,\sigma )\).

    • Let us suppose that \(\sigma \cdot w\models \varphi _1\vee \varphi _2\). We distinguish again two sub-cases: \(\varphi _1\vee \varphi _2=\top \) or \(\varphi _1\vee \varphi _2\ne \top \). If \(\varphi _1\vee \varphi _2=\top \), then this case reduces to the case where \(\varphi =\top \), already treated. If \(\varphi _1\vee \varphi _2\ne \top \), it means that either \(\sigma \cdot w\models \varphi _1\) or \(\sigma \cdot w\models \varphi _2\). Let us treat the case where \(\sigma \cdot w\models \varphi _1\) (the other case is similar). From \(\sigma \cdot w\models \varphi _1\), we can apply the induction hypothesis on \(\varphi _1\) to obtain \(w\models P(\varphi _1,\sigma )\), and then \(w\models P(\varphi _1,\sigma )\vee P(\varphi _2,\sigma ) = P(\varphi _1\vee \varphi _2,\sigma )\).

    • Let us suppose that \(w\models P(\varphi _1\vee \varphi _2,\sigma )=P(\varphi _1,\sigma )\vee P(\varphi _2,\sigma )\). We distinguish again two sub-cases: \(P(\varphi _1\vee \varphi _2,\sigma )=\top \) or \(P(\varphi _1\vee \varphi _2,\sigma )\ne \top \).

      • If \(P(\varphi _1\vee \varphi _2,\sigma )=\top \), then we again distinguish two sub-cases:

        • If \(P(\varphi _1,\sigma )=\top \) or \(P(\varphi _2,\sigma )=\top \). Let us treat the case where \(P(\varphi _1,\sigma )=\top \) (the other case is similar). Applying the induction hypothesis on \(\varphi _1\), we have \(\sigma \cdot w\models \varphi _1\iff w\models P(\varphi _1,\sigma )\). Then, for any \(w\in {\varSigma }^\omega \), since \(w\models \top =P(\varphi _1,\sigma )\), we have \(\sigma \cdot w\models \varphi _1\), and consequently \(\sigma \cdot w\models \varphi _1\vee \varphi _2\).

        • If \(P(\varphi _1,\sigma )\ne \top \) and \(P(\varphi _2,\sigma )\ne \top \), then we have \(P(\varphi _1,\sigma )=\lnot P(\varphi _2,\sigma )\). Applying the induction hypothesis on \(\varphi _1\) and \(\varphi _2\), we obtain \(\sigma \cdot w\models \varphi _1\iff \sigma \cdot w\not \models \varphi _2\). Let us consider \(w\in {\varSigma }^\omega \). If \(\sigma \cdot w\models \varphi _1\), then we have \(\sigma \cdot w\models \varphi _1\vee \varphi _2\). Else (\(\sigma \cdot w\not \models \varphi _1\)), we have \(\sigma \cdot w\models \varphi _2\), and then \(\sigma \cdot w\models \varphi _1\vee \varphi _2\).

      • If \(P(\varphi _1\vee \varphi _2,\sigma )\ne \top \), then we have either \(w\models P(\varphi _1,\sigma )\) or \(w\models P(\varphi _2,\sigma )\). Let us treat the case where \(w\models P(\varphi _1,\sigma )\) (the other case is similar). From \(w\models P(\varphi _1,\sigma )\), we can apply the induction hypothesis on \(\varphi _1\) to obtain \(\sigma \cdot w\models \varphi _1\), and thus \(\sigma \cdot w\models \varphi _1\vee \varphi _2\).

  • Case \(\varphi =\varphi _1\wedge \varphi _2\). This case is similar to the previous one.

  • Case \(\varphi =\mathbf {G}\varphi '\). Recall that, according to the progression function for operator \(\mathbf {G}\), \(P(\mathbf {G}\varphi ',\sigma )=P(\varphi ',\sigma )\wedge \mathbf {G}\varphi '\).

    • Let us suppose that \(\sigma \cdot w\models \mathbf {G}\varphi '\). According to the \(\mathrm {LTL}\) semantics of operator \(\mathbf {G}\), we have \(\forall i\in \mathbb {N}^{\ge 0}.\ (\sigma \cdot w)^i\models \varphi '\). In particular, it implies that \((\sigma \cdot w)^0\models \varphi '\), i.e., \(\sigma \cdot w\models \varphi '\), and \(\forall i\in \mathbb {N}^{\ge 0}.\ ((\sigma \cdot w)^1)^i\models \varphi '\), i.e., \((\sigma \cdot w)^1=w\models \mathbf {G}\varphi '\). Using the induction hypothesis on \(\varphi '\), from \(\sigma \cdot w\models \varphi '\), we obtain \(w\models P(\varphi ',\sigma )\). As expected, according to the \(\mathrm {LTL}\) semantics of operator \(\wedge \), we have \(w\models P(\varphi ',\sigma )\wedge \mathbf {G}\varphi '=P(\mathbf {G}\varphi ',\sigma )\).

    • Let us suppose that \(w\models P(\mathbf {G}\varphi ',\sigma )=P(\varphi ',\sigma )\wedge \mathbf {G}\varphi '\). It follows that \(w\models P(\varphi ',\sigma )\), and thus, using the induction hypothesis on \(\varphi '\), \(\sigma \cdot w\models \varphi '\). Moreover, from \(w\models \mathbf {G}\varphi '\) and the \(\mathrm {LTL}\) semantics of operator \(\mathbf {G}\), we have \(\forall i\in \mathbb {N}^{\ge 0}.\ w^i\models \varphi '\), i.e., \(\forall i\ge 1.\ (\sigma \cdot w)^i\models \varphi '\). Combined with \((\sigma \cdot w)^0=\sigma \cdot w\models \varphi '\), this gives \(\forall i\in \mathbb {N}^{\ge 0}.\ (\sigma \cdot w)^i\models \varphi '\), i.e., \(\sigma \cdot w\models \mathbf {G}\varphi '\).

  • Case \(\varphi =\mathbf {F}\varphi '\). This case is similar to the previous one.

  • Case \(\varphi =\mathbf {X}\varphi '\). On one hand, using the progression function for \(\mathbf {X}\), we have \(P(\mathbf {X}\varphi ',\sigma )=\varphi '\). On the other hand, using the \(\mathrm {LTL}\) semantics of operator \(\mathbf {X}\), we have \(\sigma \cdot w\models \mathbf {X}\varphi '\) iff \(w\models \varphi '\). Thus, we have \(\sigma \cdot w\models \mathbf {X}\varphi '\) iff \(w\models \varphi '\) iff (induction hypothesis on \(\varphi '\)) \(w\models P(\mathbf {X}\varphi ',\sigma )\).

  • Case \(\varphi =\varphi _1\mathbf {U}\varphi _2\). Recall that, according to the progression function for operator \(\mathbf {U}\), \(P(\varphi _1\mathbf {U}\varphi _2,\sigma )=P(\varphi _2,\sigma )\vee (P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2)\).

    • Let us suppose that \(\sigma \cdot w\models \varphi _1\mathbf {U}\varphi _2\). According to the \(\mathrm {LTL}\) semantics of operator \(\mathbf {U}\), we have \(\exists i\in \mathbb {N}^{\ge 0}.\ (\sigma \cdot w)^i\models \varphi _2 \wedge \forall 0\le l <i.\ (\sigma \cdot w)^l\models \varphi _1\). Let us distinguish two cases: \(i=0\) and \(i>0\).

      • If \(i=0\), then we have \(\sigma \cdot w\models \varphi _2\). Applying the induction hypothesis on \(\varphi _2\), we have \(w\models P(\varphi _2,\sigma )\), and consequently \(w\models P(\varphi _1\mathbf {U}\varphi _2,\sigma )\).

      • Else (\(i>0\)), we have \(\forall 0\le l<i.\ (\sigma \cdot w)^l\models \varphi _1\). Consequently, we have \((\sigma \cdot w)^0 \models \varphi _1\), and thus \(\sigma \cdot w\models \varphi _1\). Moreover, from \(\forall 0\le l<i.\ (\sigma \cdot w)^l\models \varphi _1\), we deduce \(\forall 0\le l<i-1.\ w^l\models \varphi _1\). From \((\sigma \cdot w)^i\models \varphi _2\), we deduce \(w^{i-1}\models \varphi _2\). From \(w^{i-1}\models \varphi _2\) and \(\forall 0\le l<i-1.\ w^l\models \varphi _1\), we deduce \(w\models \varphi _1\mathbf {U}\varphi _2\). Applying the induction hypothesis on \(\varphi _1\), from \(\sigma \cdot w\models \varphi _1\), we obtain \(w\models P(\varphi _1,\sigma )\). Finally, from \(w\models \varphi _1\mathbf {U}\varphi _2\) and \(w\models P(\varphi _1,\sigma )\), we obtain \(w\models P(\varphi _1\mathbf {U}\varphi _2,\sigma )\).

    • Let us suppose that \(w\models P(\varphi _1\mathbf {U}\varphi _2,\sigma )\).

      We distinguish two cases: \(P(\varphi _1\mathbf {U}\varphi _2,\sigma )=\top \) and \(P(\varphi _1\mathbf {U}\varphi _2,\sigma )\ne \top \).

      • If \(P(\varphi _1\mathbf {U}\varphi _2,\sigma )=P(\varphi _2,\sigma )\vee (P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2)=\top \). We distinguish again two sub-cases.

        • If \(P(\varphi _2,\sigma )=\top \) or \(P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2=\top \). If \(P(\varphi _2,\sigma )=\top \), then applying the induction hypothesis on \(\varphi _2\), we have \(\sigma \cdot w\models \varphi _2\iff w\models \top \). Then, from \(\sigma \cdot w\models \varphi _2\), we obtain, according to the \(\mathrm {LTL}\) semantics of operator \(\mathbf {U}\), \(\sigma \cdot w\models \varphi _1\mathbf {U}\varphi _2\). If \(P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2=\top \), we directly deduce that \(\varphi _1\mathbf {U}\varphi _2=\top \), and then this case reduces to the case where \(\varphi =\top \), already treated.

        • If \(P(\varphi _2,\sigma )\ne \top \) and \(P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2\ne \top \), then we have \(P(\varphi _2,\sigma )=\lnot (P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2)=\lnot P(\varphi _1,\sigma )\vee \lnot (\varphi _1\mathbf {U}\varphi _2)\). Applying the induction hypothesis on \(\varphi _1\) and \(\varphi _2\), we have \(\sigma \cdot w\models \varphi _1\iff w\models P(\varphi _1,\sigma )\), and \(\sigma \cdot w\models \varphi _2\iff w\models P(\varphi _2,\sigma )\), and thus \(\sigma \cdot w\models \varphi _2\iff (\sigma \cdot w\not \models \varphi _1\vee w\not \models \varphi _1\mathbf {U}\varphi _2)\). Let us now follow the \(\mathrm {LTL}\) semantics of operator \(\mathbf {U}\) and consider the two cases: \(\sigma \cdot w\models \varphi _2\) or \(\sigma \cdot w\not \models \varphi _2\). If \(\sigma \cdot w\models \varphi _2\), then \(\sigma \cdot w\models \varphi _1\mathbf {U}\varphi _2\) (according to the \(\mathrm {LTL}\) semantics of \(\mathbf {U}\)). Else (\(\sigma \cdot w\not \models \varphi _2\)), we have \(\sigma \cdot w\models \varphi _1\) and \(w\models \varphi _1\mathbf {U}\varphi _2\), and thus \(\sigma \cdot w\models \varphi _1\mathbf {U}\varphi _2\).

      • If \(P(\varphi _1\mathbf {U}\varphi _2,\sigma )\ne \top \), it means that either \(w\models P(\varphi _2,\sigma )\) or \(w\models P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2\).

        • If \(w\models P(\varphi _2,\sigma )\), then applying the induction hypothesis on \(\varphi _2\), we have \(\sigma \cdot w\models \varphi _2\). Then, following the \(\mathrm {LTL}\) semantics of operator \(\mathbf {U}\), we obtain \(\sigma \cdot w\models \varphi _1\mathbf {U}\varphi _2\).

        • If \(w\models P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2\), then we have \(w\models P(\varphi _1,\sigma )\) and \(w\models \varphi _1\mathbf {U}\varphi _2\). Applying the induction hypothesis on \(\varphi _1\), we have \(\sigma \cdot w\models \varphi _1\). From \(w\models \varphi _1\mathbf {U}\varphi _2\), we have \(\exists i\in \mathbb {N}^{\ge 0}.\ w^i\models \varphi _2\wedge \forall 0\le l<i.\ w^l\models \varphi _1\). It implies that \((\sigma \cdot w)^{i+1}\models \varphi _2\) and \(\forall 0<l< i+1.\ (\sigma \cdot w)^l\models \varphi _1\). Using \(\sigma \cdot w\models \varphi _1\), i.e., \((\sigma \cdot w)^0\models \varphi _1\), and the \(\mathrm {LTL}\) semantics of operator \(\mathbf {U}\), we finally obtain \(\sigma \cdot w\models \varphi _1\mathbf {U}\varphi _2\).\(\square \)
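
As a concrete illustration of the progression rules manipulated in this proof (an example of ours, not taken from the paper), consider the event \(\sigma =\{a\}\) and the formula \(a\,\mathbf {U}\,b\):

$$\begin{aligned} P(a\,\mathbf {U}\,b,\{a\}) = P(b,\{a\}) \vee \big (P(a,\{a\})\wedge a\,\mathbf {U}\,b\big ) = \bot \vee (\top \wedge a\,\mathbf {U}\,b), \end{aligned}$$

which simplifies to \(a\,\mathbf {U}\,b\). In accordance with Lemma 1, for any \(w\in {\varSigma }^\omega \), we indeed have \(\{a\}\cdot w\models a\,\mathbf {U}\,b\) iff \(w\models a\,\mathbf {U}\,b\): the event \(\{a\}\) satisfies \(a\), and the until obligation is carried over unchanged to the remainder of the trace.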

1.1.2 Proof of Lemma 2 (p. 6)

We shall prove the following statement.

$$\begin{aligned} \begin{array}{lll} \forall \varphi \in LTL. \forall \sigma \in {\varSigma }.&{} &{} P(\varphi , \sigma ) = \top \implies \sigma \in {{\mathrm{good}}}(\varphi )\\ &{} \wedge &{} P(\varphi , \sigma ) = \bot \implies \sigma \in {{\mathrm{bad}}}(\varphi ). \end{array} \end{aligned}$$

The proof uses the definition of the \(\mathrm {LTL}\) semantics (Definition 1), the definition of good and bad prefixes (Definition 2), the progression function (Definition 3), and Lemma 1.
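
Operationally, Lemma 2 is what allows a local monitor to turn the result of progression into a verdict. The following sketch is ours and only illustrative (it builds on the hypothetical progress function sketched after Lemma 1 and is not the implementation accompanying the paper): it applies the usual Boolean simplifications so that a good or bad prefix surfaces as the constant \(\top \) or \(\bot \).

```python
def simplify(phi):
    """Apply the usual Boolean simplifications so that verdicts
    surface as the constants ('top',) or ('bot',)."""
    op = phi[0]
    if op == 'not':
        f = simplify(phi[1])
        if f == ('top',): return ('bot',)
        if f == ('bot',): return ('top',)
        return ('not', f)
    if op == 'and':
        f, g = simplify(phi[1]), simplify(phi[2])
        if ('bot',) in (f, g): return ('bot',)
        if f == ('top',): return g
        if g == ('top',): return f
        return ('and', f, g)
    if op == 'or':
        f, g = simplify(phi[1]), simplify(phi[2])
        if ('top',) in (f, g): return ('top',)
        if f == ('bot',): return g
        if g == ('bot',): return f
        return ('or', f, g)
    return phi  # constants, propositions and temporal operators are left as they are


def verdict(phi, sigma):
    """Lemma 2, operationally: progress phi by one event and report a verdict
    when the progressed formula collapses to top (good prefix) or bot (bad prefix)."""
    rewritten = simplify(progress(phi, sigma))
    if rewritten == ('top',):
        return 'good prefix'       # the specification is satisfied by sigma
    if rewritten == ('bot',):
        return 'bad prefix'        # the specification is violated by sigma
    return rewritten               # no verdict yet: keep monitoring the rewritten formula
```

For instance, calling verdict on the encoding of \(a\wedge b\) with the event \(\{a\}\) returns “bad prefix”, in line with \(P(a\wedge b,\{a\})=\top \wedge \bot =\bot \), i.e., \(\{a\}\in {{\mathrm{bad}}}(a\wedge b)\).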

Proof

According to Lemma 1, we have \(\forall \sigma \in {\varSigma }.\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi \iff w\models P(\varphi ,\sigma )\). Consequently, for a given \(\sigma \in {\varSigma }\): when \(P(\varphi ,\sigma )=\top \), every \(w\in {\varSigma }^\omega \) satisfies \(P(\varphi ,\sigma )\), and thus \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi \), i.e., \(\sigma \in {{\mathrm{good}}}(\varphi )\); similarly, when \(P(\varphi ,\sigma )=\bot \), no \(w\in {\varSigma }^\omega \) satisfies \(P(\varphi ,\sigma )\), and thus \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\not \models \varphi \), i.e., \(\sigma \in {{\mathrm{bad}}}(\varphi )\).

The proof can also be obtained in a more detailed manner as shown below. Let us consider \(\sigma \in {\varSigma }\) and \(\varphi \in \mathrm {LTL}\,\). The proof is performed by a structural induction on \(\varphi \).

Base case: \(\varphi \in \{\top ,\bot ,p\in {{{AP}_{}}}\}\).

  • Case \(\varphi =\top \). In this case, the proof is trivial since \(P(\top ,\sigma )=\top \) and, according to the \(\mathrm {LTL}\) semantics of \(\top \) and the definition of good prefixes, \({{\mathrm{good}}}(\top )={\varSigma }^*\).

  • Case \(\varphi =\bot \). Similarly, in this case, the proof is trivial since \(P(\bot ,\sigma )=\bot \) and \({{\mathrm{bad}}}(\bot )={\varSigma }^*\).

  • Case \(\varphi =p\in AP\).

    Let us suppose that \(P(\varphi ,\sigma )=\top \). According to the progression function, it means that \(p\in \sigma \). Moreover, since \(\varphi =p\), according to the \(\mathrm {LTL}\) semantics of atomic propositions, for any \(w\in {\varSigma }^\omega \), we have \(\sigma \cdot w \models \varphi \). According to the definition of good prefixes, it means that \(\sigma \in {{\mathrm{good}}}(\varphi )\).

    The proof for \(P(\varphi ,\sigma )=\bot \implies \sigma \in {{\mathrm{bad}}}(\varphi )\) is similar.

Induction case: \(\varphi \in \{\lnot \varphi ',\varphi _1\vee \varphi _2,\varphi _1\wedge \varphi _2,\mathbf {G}\varphi ',\mathbf {F}\varphi ',\mathbf {X}\varphi ',\varphi _1\mathbf {U}\varphi _2\}\). The induction hypothesis states that the lemma holds for some formulae \(\varphi ',\varphi _1,\varphi _2\in \mathrm {LTL}\,\).

  • Case \(\varphi =\lnot \varphi '\). In this case, the result is obtained by using the induction hypothesis on \(\varphi '\) and the equalities \(\bot =\lnot \top \) and \(\lnot (\lnot \varphi )=\varphi \).

  • Case \(\varphi =\varphi _1\vee \varphi _2\). Recall that, according to the progression function for operator \(\vee \), \(P(\varphi _1\vee \varphi _2,\sigma )=P(\varphi _1,\sigma )\vee P(\varphi _2,\sigma )\).

    Let us suppose that \(P(\varphi ,\sigma )=\top \). We distinguish two cases:

    • If \(P(\varphi _1,\sigma )=\top \) or \(P(\varphi _2,\sigma )=\top \). Let us treat the case where \(P(\varphi _1,\sigma )=\top \). Using the induction hypothesis on \(\varphi _1\), we have \(\sigma \in {{\mathrm{good}}}(\varphi _1)\). According to the definition of good prefixes, we have \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi _1\). We easily deduce, using the \(\mathrm {LTL}\) semantics of operator \(\vee \), that \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w \models \varphi _1\vee \varphi _2\), that is, \(\sigma \in {{\mathrm{good}}}(\varphi _1\vee \varphi _2)\).

    • If \(P(\varphi _1,\sigma )\ne \top \) and \(P(\varphi _2,\sigma )\ne \top \). Since \(P(\varphi ,\sigma )=\top \), we have \(P(\varphi _1,\sigma )=\lnot P(\varphi _2,\sigma )\). Using Lemma 1, we have \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi _1\iff w\models P(\varphi _1,\sigma )\) and \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi _2\iff w\models P(\varphi _2,\sigma )\). We deduce that \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi _1 \iff \sigma \cdot w\not \models \varphi _2\). Let us consider \(w\in {\varSigma }^\omega \). If \(\sigma \cdot w\models \varphi _1\), we have \(\sigma \cdot w\models \varphi _1\vee \varphi _2\). Else (\(\sigma \cdot w\not \models \varphi _1\)), we have \(\sigma \cdot w\models \varphi _2\), and then \(\sigma \cdot w\models \varphi _2\vee \varphi _1\). That is, \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi _1\vee \varphi _2\), i.e., \(\sigma \in {{\mathrm{good}}}(\varphi _1\vee \varphi _2)\).

    Let us suppose that \(P(\varphi ,\sigma )=\bot \). In this case, we have \(P(\varphi _1,\sigma )=\bot \) and \(P(\varphi _2,\sigma )=\bot \). Similarly, we can apply the induction hypothesis on \(\varphi _1\) and \(\varphi _2\) to find that \(\sigma \) is a bad prefix of both \(\varphi _1\) and \(\varphi _2\), and is thus a bad prefix of \(\varphi _1\vee \varphi _2\) (using the \(\mathrm {LTL}\) semantics of operator \(\vee \)).

  • Case \(\varphi =\varphi _1\wedge \varphi _2\). This case is symmetrical to the previous one.

  • Case \(\varphi =\mathbf {G}\varphi '\). Recall that, according to the progression function for operator \(\mathbf {G}\), \(P(\mathbf {G}\varphi ',\sigma )=P(\varphi ',\sigma ) \wedge \mathbf {G}\varphi '\).

    Let us suppose that \(P(\varphi ,\sigma )=\top \). It means that \(P(\varphi ',\sigma )=\top \) and \(\mathbf {G}\varphi '=\top \). This case reduces to the case where \(\varphi =\top \).

    Let us suppose that \(P(\varphi ,\sigma )=\bot \). We distinguish two cases.

    • If \(P(\varphi ',\sigma )=\bot \) or \(\mathbf {G}\varphi '=\bot \). We distinguish again two sub-cases.

      • Sub-case \(P(\varphi ',\sigma )=\bot \). Using the induction hypothesis on \(\varphi '\), we deduce that \(\sigma \in {{\mathrm{bad}}}(\varphi ')\), i.e., \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\not \models \varphi '\). Following the \(\mathrm {LTL}\) semantics of operator \(\mathbf {G}\), we deduce that \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\not \models \mathbf {G}\varphi '\), i.e., \(\sigma \in {{\mathrm{bad}}}(\mathbf {G}\varphi ')\).

      • Sub-case \(\mathbf {G}\varphi '=\bot \). This case reduces to the case where \(\varphi =\bot \).

    • If \(P(\varphi ',\sigma )\ne \bot \) and \(\mathbf {G}\varphi '\ne \bot \). From \(P(\varphi ',\sigma ) \wedge \mathbf {G}\varphi '=\bot \), we deduce that \(P(\varphi ',\sigma ) = \lnot \mathbf {G}\varphi '\). Using Lemma 1 on \(\varphi '\), we have \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi ' \iff w\models P(\varphi ',\sigma )\). Thus \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi ' \iff w\not \models \mathbf {G}\varphi '\). Let us consider \(w\in {\varSigma }^\omega \). If \(\sigma \cdot w\models \varphi '\), then we have \(w\not \models \mathbf {G}\varphi '\). According to the \(\mathrm {LTL}\) semantics of operator \(\mathbf {G}\), it means that \(\exists i\in \mathbb {N}^{\ge 0}.\ w^i\not \models \varphi '\). Thus, still following the \(\mathrm {LTL}\) semantics of operator \(\mathbf {G}\), \((\sigma \cdot w)^{i+1}\not \models \varphi '\), and, consequently \(\sigma \cdot w\not \models \mathbf {G}\varphi '\). Else (\(\sigma \cdot w\not \models \varphi '\)), we have directly \(\sigma \cdot w\not \models \mathbf {G}\varphi '\).

  • Case \(\varphi =\mathbf {F}\varphi '\). Recall that, according to the progression function for operator \(\mathbf {F}\), \(P(\mathbf {F}\varphi ',\sigma )=P(\varphi ',\sigma )\vee \mathbf {F}\varphi '\).

    Let us suppose that \(P(\varphi ,\sigma )=\top \). We distinguish two cases.

    • If \(P(\varphi ',\sigma )=\top \) or \(\mathbf {F}\varphi '=\top \).

      • Sub-case \(P(\varphi ',\sigma )=\top \). Following the previous reasoning, using the induction hypothesis on \(\varphi '\), the \(\mathrm {LTL}\) semantics of operator \(\mathbf {F}\), and the definition of good prefixes, we obtain the expected result.

      • Sub-case \(\mathbf {F}\varphi '=\top \). This case reduces to the case where \(\varphi =\top \).

    • If \(P(\varphi ',\sigma )\ne \top \) and \(\mathbf {F}\varphi '\ne \top \). From \(P(\varphi ',\sigma ) \vee \mathbf {F}\varphi '=\top \), we deduce that \(P(\varphi ',\sigma ) = \lnot \mathbf {F}\varphi '\). Using Lemma 1 on \(\varphi '\), we have \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi ' \iff w\models P(\varphi ',\sigma )\). We thus have \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi ' \iff w\not \models \mathbf {F}\varphi '\). Let us consider \(w\in {\varSigma }^\omega \). If \(\sigma \cdot w\models \varphi '\), using the \(\mathrm {LTL}\) semantics of operator \(\mathbf {F}\), we have directly \(\sigma \cdot w\models \mathbf {F}\varphi '\). Else (\(\sigma \cdot w\not \models \varphi '\)), we have \(w\models \mathbf {F}\varphi '\). According to the \(\mathrm {LTL}\) semantics of operator \(\mathbf {F}\), it means that \(\exists i\in \mathbb {N}^{\ge 0}.\ w^i\models \varphi '\), and thus \((\sigma \cdot w)^{i+1}\models \varphi '\). Consequently \(\sigma \cdot w\models \mathbf {F}\varphi '\). That is, \(\sigma \in {{\mathrm{good}}}(\mathbf {F}\varphi ')\).

    Let us suppose that \(P(\varphi ,\sigma )=\bot \). It means that \(P(\varphi ',\sigma )=\bot \) and \(\mathbf {F}\varphi '=\bot \). A similar reasoning as the one used for the case \(\varphi =\mathbf {G}\varphi '\) and \(P(\varphi ,\sigma )=\top \) can be applied to obtain the expected result.

  • Case \(\varphi =\mathbf {X}\varphi '\). Recall that, according to the progression function for operator \(\mathbf {X}\), \(P(\mathbf {X}\varphi ',\sigma )=\varphi '\).

    Let us suppose that \(P(\varphi ,\sigma )=\top \). It means that \(\varphi '=\top \). According to the \(\mathrm {LTL}\) semantics of \(\top \), we have \(\forall w\in {\varSigma }^\omega .\ w\models \varphi '\). Then, \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \mathbf {X}\varphi '=\varphi \). That is, \(\sigma \in {{\mathrm{good}}}(\mathbf {X}\varphi ')\).

    Let us suppose that \(P(\varphi ,\sigma )=\bot \). It means that \(\varphi '=\bot \). According to the \(\mathrm {LTL}\) semantics of \(\bot \), we have \(\forall w\in {\varSigma }^\omega .\ w\not \models \varphi '\). Then, \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\not \models \mathbf {X}\varphi '=\varphi \). That is, \(\sigma \in {{\mathrm{bad}}}(\mathbf {X}\varphi ')\).

  • Case \(\varphi =\varphi _1 \mathbf {U}\varphi _2\). Recall that, according to the progression function for operator \(\mathbf {U}\), \(P(\varphi _1\mathbf {U}\varphi _2,\sigma )=P(\varphi _2,\sigma )\vee (P(\varphi _1,\sigma )\wedge \varphi _1 \mathbf {U}\varphi _2)\).

    Let us suppose that \(P(\varphi ,\sigma )=\top \). We distinguish two cases.

    • If \(P(\varphi _2,\sigma )=\top \) or \(P(\varphi _1,\sigma )\wedge \varphi _1 \mathbf {U}\varphi _2=\top \).

      • Sub-case \(P(\varphi _2,\sigma )=\top \). Using the induction hypothesis on \(\varphi _2\), we have \(\sigma \in {{\mathrm{good}}}(\varphi _2)\). Let us consider \(w\in {\varSigma }^\omega \); we have \(\sigma \cdot w\in \mathcal {L}(\varphi _2)\), i.e., \((\sigma \cdot w)^0\models \varphi _2\). According to the \(\mathrm {LTL}\) semantics of \(\mathbf {U}\), we have \(\sigma \cdot w\models \varphi _1\mathbf {U}\varphi _2\), i.e., \(\sigma \cdot w \in \mathcal {L}(\varphi _1\mathbf {U}\varphi _2)\). We deduce that \(\sigma \in {{\mathrm{good}}}(\varphi _1 \mathbf {U}\varphi _2)\).

      • Sub-case \(P(\varphi _1,\sigma ) \wedge \varphi _1\mathbf {U}\varphi _2=\top \). Necessarily, \(\varphi _1\mathbf {U}\varphi _2=\top \) and this case reduces to the first one already treated.

    • If \(P(\varphi _2,\sigma )\ne \top \) and \(P(\varphi _1,\sigma )\wedge \varphi _1 \mathbf {U}\varphi _2\ne \top \). From \(P(\varphi _1\mathbf {U}\varphi _2,\sigma )=\top \), we deduce that \(P(\varphi _2,\sigma ) = \lnot (P(\varphi _1,\sigma )\wedge \varphi _1 \mathbf {U}\varphi _2)\). Applying Lemma 1 to \(\varphi _2\), we obtain \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi _2 \iff w\models P(\varphi _2,\sigma )\). We thus have \(\forall w\in {\varSigma }^\omega .\ \sigma \cdot w\models \varphi _2 \iff w \not \models P(\varphi _1,\sigma )\wedge \varphi _1 \mathbf {U}\varphi _2\). Let us consider \(w\in {\varSigma }^\omega \). Let us distinguish two cases. If \(\sigma \cdot w\models \varphi _2\), according to the \(\mathrm {LTL}\) semantics of \(\mathbf {U}\), we have \(\sigma \cdot w\models \varphi _1\mathbf {U}\varphi _2\). Else (\(\sigma \cdot w\not \models \varphi _2\)), it implies that \(w\models P(\varphi _1,\sigma )\wedge \varphi _1 \mathbf {U}\varphi _2\); by Lemma 1 applied to \(\varphi _1\), we have \(\sigma \cdot w\models \varphi _1\), and, from \(w\models \varphi _1 \mathbf {U}\varphi _2\) and the \(\mathrm {LTL}\) semantics of \(\mathbf {U}\), we obtain \(\sigma \cdot w\models \varphi _1 \mathbf {U}\varphi _2\). That is, in both cases, \(\sigma \in {{\mathrm{good}}}(\varphi _1\mathbf {U}\varphi _2)\). \(\square \)

1.1.3 Some intermediate lemmas

The following lemma states some equalities that follow directly from an inductive application of the definition of the progression function on events. (Proofs of Lemmas 3 and 4 are given on p. 14 and p. 36, respectively.)

Lemma 5

Given some formulae \(\varphi ,\varphi _1,\varphi _2\in \mathrm {LTL}\,\), and a trace \(u\in {\varSigma }^+\), the progression function can be extended to the trace u by successively applying the previously defined progression function to each event of u in order. Moreover, we have: \(\forall \varphi ,\varphi _1,\varphi _2\in \mathrm {LTL}\,.\forall u\in {\varSigma }^+\).

$$\begin{aligned} \begin{array}{rcl} P(\top ,u) &{}=&{} \top ,\\ P(\bot ,u) &{}=&{} \bot ,\\ P(p\in {{{AP}_{,}}}u) &{}=&{} \top \ \text {if}\ p\in u(0), \bot \ \text {otherwise}, \\ P(\lnot \varphi ,u)&{}=&{}\lnot P(\varphi ,u),\\ P(\varphi _1\vee \varphi _2,u)&{}=&{}P(\varphi _1,u)\vee P(\varphi _2,u),\\ P(\varphi _1\wedge \varphi _2,u)&{}=&{}P(\varphi _1,u)\wedge P(\varphi _2,u),\\ P(\mathbf {G}\varphi ,u)&{}=&{} {\bigwedge }_{i=0}^{|u|-1} P(\varphi ,u^i)\wedge \mathbf {G}\varphi ,\\ P(\mathbf {F}\varphi ,u)&{}=&{} {\bigvee }_{i=0}^{|u|-1} P(\varphi ,u^i)\vee \mathbf {F}\varphi ,\\ P(\mathbf {X}\varphi ,u)&{}=&{} \left\{ \begin{array}{ll} \varphi &{} \quad \text {if } |u|=1 \\ P(\varphi ,u^1) &{} \quad \text {otherwise} \end{array}\right. \\ P(\varphi _1\mathbf {U}\varphi _2,u) &{}=&{} \left\{ \begin{array}{ll} P(\varphi _2,u) \vee P(\varphi _1,u)\wedge \varphi _1\mathbf {U}\varphi _2 &{} \quad \text {if } \; |u|=1,\\ {\bigvee }_{i=0}^{|u|-1} \big (P(\varphi _2,u^i) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j)\big ) \vee {\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i)\wedge \varphi _1\mathbf {U}\varphi _2 &{} \quad \text {otherwise}. \end{array}\right. \\ \end{array} \end{aligned}$$

Proof

The proof is done by two inductions: an induction on the length of the trace u (which is also the number of times the progression function is applied) and a structural induction on \(\varphi \in \mathrm {LTL}\,\).

Base case: \(u=\sigma \in {\varSigma }, |u|=1\).

In this case, the result holds thanks to the definition of the progression function.

Induction case:

Let us suppose that the lemma holds for any trace \(u\in {\varSigma }^+\) of some length \(t\in \mathbb {N}\), and let us consider the trace \(u\cdot \sigma \in {\varSigma }^+\). We perform a structural induction on \(\varphi \in \mathrm {LTL}\,\).

Structural Base case: \(\varphi \in \{\top , \bot ,p\in {{{AP}_{}}}\}\).

  • Case \(\varphi =\top \). In this case the result is trivial since we have:

    $$\begin{aligned} \begin{array}{rcll} P(\top ,u\cdot \sigma ) &{}= &{}P(P(\top ,u),\sigma ) &{} \quad \text {(extended progression)}\\ &{}=&{}P(\top ,\sigma ) &{} \quad \text {(induction hypothesis on } u\text {)}\\ &{}=&{}\top &{} \quad \text {(progression on events)} \end{array} \end{aligned}$$
  • Case \(\varphi =\bot \). This case is symmetrical to the previous one.

  • Case \(\varphi = p\in {{{AP}_{}}}\). Let us distinguish two cases: \(p\in u(0)\) or \(p\notin u(0)\).

    • If \(p\in u(0)\), we have:

      $$\begin{aligned} \begin{array}{rcll} P(p,u\cdot \sigma ) &{}= &{}P(P(p,u),\sigma ) &{} \quad \text {(extended progression)}\\ &{}=&{}P(\top ,\sigma ) &{} \quad \text {(induction hypothesis on } u\text {)}\\ &{}=&{}\top &{} \quad \text {(progression on events)} \end{array} \end{aligned}$$
    • If \(p\notin u(0)\), we have:

      $$\begin{aligned} \begin{array}{rcll} P(p,u\cdot \sigma ) &{}= &{}P(P(p,u),\sigma ) &{} \quad \text {(extended progression)}\\ &{}=&{}P(\bot ,\sigma ) &{} \quad \text {(induction hypothesis on } u\text {)}\\ &{}=&{}\bot &{} \quad \text {(progression on events)} \end{array} \end{aligned}$$

Induction Case: \(\varphi \in \{\lnot \varphi ', \varphi _1\vee \varphi _2,\varphi _1\wedge \varphi _2, \mathbf {G}\varphi ',\mathbf {F}\varphi ',\mathbf {X}\varphi ',\varphi _1\mathbf {U}\varphi _2\}\). Our induction hypothesis states that the lemma holds for some formulae \(\varphi ',\varphi _1,\varphi _2\in \mathrm {LTL}\,\).

  • Case \(\varphi =\lnot \varphi '\). We have:

    $$\begin{aligned} \begin{array}{rcll} P(\lnot \varphi ',u\cdot \sigma ) &{}= &{}P(P(\lnot \varphi ',u),\sigma ) &{} \quad \text {(extended progression)}\\ &{}=&{} P(\lnot P(\varphi ',u),\sigma ) &{} \quad \text {(induction hypothesis on } u\text { and } \varphi '\text {)}\\ &{}=&{}\lnot P( P(\varphi ',u),\sigma ) &{} \quad \text {(progression on events)}\\ &{}=&{}\lnot P(\varphi ',u\cdot \sigma ) &{} \quad \text {(extended progression)} \end{array} \end{aligned}$$
  • Case \(\varphi =\mathbf {X}\varphi '\). We have:

    $$\begin{aligned} \begin{array}{rcll} P(\mathbf {X}\varphi ',u\cdot \sigma ) &{} = &{} P(P(\mathbf {X}\varphi ',u),\sigma ) &{} \quad \text {(extended progression)}\\ &{}=&{} P(P(\varphi ',u^1),\sigma ) &{} \quad \text {(induction hypothesis on } u \text { and } \varphi '\text {)}\\ &{}=&{}P(\varphi ',u^1\sigma ) &{} \quad \text {(extended progression)}\\ &{}=&{}P(\varphi ',(u\cdot \sigma )^1) &{}\\ \end{array} \end{aligned}$$
  • Case \(\varphi =\varphi _1\vee \varphi _2\). We have:

$$\begin{aligned} \begin{array}{rcll} P(\varphi _1\vee \varphi _2,u\cdot \sigma ) &{}= &{}P(P(\varphi _1\vee \varphi _2,u),\sigma ) &{} \quad \text {(extended progression)}\\ &{}=&{} P(P(\varphi _1,u)\vee P(\varphi _2,u),\sigma ) &{} \quad \text {(induction hypothesis on } u \text { and } \varphi _1,\varphi _2\text {)}\\ &{}=&{} P(P(\varphi _1,u),\sigma ) \vee P(P(\varphi _2,u),\sigma ) &{} \quad \text {(progression on events)}\\ &{}=&{} P(\varphi _1,u\cdot \sigma ) \vee P(\varphi _2,u\cdot \sigma ) &{} \quad \text {(extended progression)}\\ \end{array} \end{aligned}$$
  • Case \(\varphi =\varphi _1\wedge \varphi _2\). This case is similar to the previous one.

  • Case \(\varphi =\mathbf {G}\varphi '\). We have:

$$\begin{aligned} \begin{array}{l@{\quad }l} P(\mathbf {G}\varphi ',u\cdot \sigma ) &{} \\ \quad = P(P(\mathbf {G}\varphi ',u),\sigma ) &{} \text {(extended progression)}\\ \quad = P({\bigwedge }_{i=0}^{|u|-1} P(\varphi ',u^i)\wedge \mathbf {G}\varphi ',\sigma ) &{} \text {(induction hypothesis on } u \text { and } \varphi '\text {)}\\ \quad = P({\bigwedge }_{i=0}^{|u|-1} P(\varphi ',u^i),\sigma ) \wedge P( \mathbf {G}\varphi ',\sigma ) &{} \text {(progression on events for } \wedge \text {)}\\ \quad = {\bigwedge }_{i=0}^{|u|-1} P(P(\varphi ',u^i),\sigma ) \wedge P( \mathbf {G}\varphi ',\sigma ) &{} \text {(extended progression for } \wedge \text {)}\\ \quad = {\bigwedge }_{i=0}^{|u|-1} P(\varphi ',u^i\cdot \sigma ) \wedge P( \mathbf {G}\varphi ',\sigma ) &{} \text {(extended progression)}\\ \quad = {\bigwedge }_{i=0}^{|u|-1} P(\varphi ',u^i\cdot \sigma ) \wedge P(\varphi ',\sigma )\wedge \mathbf {G}\varphi ' &{} \text {(progression on events for } \mathbf {G}\text {)}\\ \quad = {\bigwedge }_{i=0}^{|u\cdot \sigma |-2} P(\varphi ',(u\cdot \sigma )^i) \wedge P(\varphi ',(u\cdot \sigma )^{|u\cdot \sigma |-1})\wedge \mathbf {G}\varphi ' &{} \text {(}u^i\cdot \sigma =(u\cdot \sigma )^i \text { and } \sigma =(u\cdot \sigma )^{|u\cdot \sigma |-1}\text {)}\\ \quad = {\bigwedge }_{i=0}^{|u\cdot \sigma |-1} P(\varphi ',(u\cdot \sigma )^i)\wedge \mathbf {G}\varphi ' &{} \end{array} \end{aligned}$$
  • Case \(\varphi =\mathbf {F}\varphi '\). We have:

    $$\begin{aligned} \begin{array}{l@{\quad }l} P(\mathbf {F}\varphi ',u\cdot \sigma ) &{} \\ \quad = P(P(\mathbf {F}\varphi ',u),\sigma ) &{} \text {(extended progression)}\\ \quad = P({\bigvee }_{i=0}^{|u|-1} P(\varphi ',u^i)\vee \mathbf {F}\varphi ',\sigma ) &{} \text {(induction hypothesis on } u \text { and } \varphi '\text {)}\\ \quad = P({\bigvee }_{i=0}^{|u|-1} P(\varphi ',u^i),\sigma ) \vee P( \mathbf {F}\varphi ',\sigma ) &{} \text {(progression on events)}\\ \quad = {\bigvee }_{i=0}^{|u|-1} P(\varphi ',u^i\cdot \sigma ) \vee P( \mathbf {F}\varphi ',\sigma ) &{} \text {(extended progression for } \vee \text {)}\\ \quad = {\bigvee }_{i=0}^{|u|-1} P(\varphi ',u^i\cdot \sigma ) \vee P(\varphi ',\sigma )\vee \mathbf {F}\varphi ' &{} \text {(progression on events for } \mathbf {F}\text {)}\\ \quad = {\bigvee }_{i=0}^{|u\cdot \sigma |-2} P(\varphi ',(u\cdot \sigma )^i) \vee P(\varphi ',(u\cdot \sigma )^{|u\cdot \sigma |-1}) \vee \mathbf {F}\varphi ' &{} \text {(}u^i\cdot \sigma =(u\cdot \sigma )^i \text { and }\sigma =(u\cdot \sigma )^{|u\cdot \sigma |-1}\text {)}\\ \quad = {\bigvee }_{i=0}^{|u\cdot \sigma |-1} P(\varphi ',(u\cdot \sigma )^i)\vee \mathbf {F}\varphi ' &{} \end{array} \end{aligned}$$
  • Case \(\varphi =\varphi _1\mathbf {U}\varphi _2\). We have:

$$\begin{aligned} \begin{array}{l} P(\varphi _1\mathbf {U}\varphi _2,u\cdot \sigma ) \\ \text {(extended progression)}\\ \quad = P(P(\varphi _1\mathbf {U}\varphi _2,u),\sigma )\\ \text {(induction hypothesis on } u\text {, and structural induction hypothesis on } \varphi _1 \text { and } \varphi _2\text {)}\\ \quad = P\Big ({\bigvee }_{i=0}^{|u|-1} \big (P(\varphi _2,u^i) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j)\big ) \vee {\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i)\wedge \varphi _1\mathbf {U}\varphi _2,\sigma \Big )\\ \text {(progression on events for } \vee \text {)}\\ \quad = P\Big ({\bigvee }_{i=0}^{|u|-1} \big (P(\varphi _2,u^i) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j)\big ),\sigma ) \vee P({\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i)\wedge \varphi _1\mathbf {U}\varphi _2,\sigma \Big )\\ \text {(progression on events for } \wedge \text { and } \vee \text {)}\\ \quad = {\bigvee }_{i=0}^{|u|-1} \big (P(P(\varphi _2,u^i),\sigma ) \wedge {\bigwedge }_{j=0}^{i-1} P(P(\varphi _1,u^j),\sigma )\big ) \vee {\bigwedge }_{i=0}^{|u|-1} P(P(\varphi _1,u^i),\sigma ) \\ \quad \quad \wedge P(\varphi _1\mathbf {U}\varphi _2,\sigma )\\ \text {(extended progression)}\\ \quad = {\bigvee }_{i=0}^{|u|-1} \big (P(\varphi _2,u^i\cdot \sigma ) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j\cdot \sigma )\big ) \vee {\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i\cdot \sigma )\wedge P(\varphi _1\mathbf {U}\varphi _2,\sigma ).\\ \end{array} \end{aligned}$$

Moreover:

$$\begin{aligned} \begin{array}{l} {\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i\cdot \sigma )\wedge P(\varphi _1\mathbf {U}\varphi _2,\sigma ) \\ \text {(progression on events for } \mathbf {U}\text {)}\\ \quad = {\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i\cdot \sigma )\wedge (P(\varphi _2,\sigma )\vee P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2) \\ \text {(distribution of } \wedge \text { over } \vee \text {)}\\ \quad = \big ({\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i\cdot \sigma ) \wedge P(\varphi _2,\sigma ) \big )\vee \big ({\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i\cdot \sigma )\wedge P(\varphi _1,\sigma )\wedge \varphi _1\mathbf {U}\varphi _2\big ) \\ \text {(}\sigma =(u\cdot \sigma )^{|u\cdot \sigma |-1} \text { and elimination of } P(\varphi _1,\sigma )\text {)}\\ \quad = \big ({\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i\cdot \sigma ) \wedge P(\varphi _2,\sigma ) \big )\vee \big ({\bigwedge }_{i=0}^{|u\cdot \sigma |-1} P(\varphi _1,u^i\cdot \sigma )\wedge \varphi _1\mathbf {U}\varphi _2\big ) \\ \end{array} \end{aligned}$$

Furthermore:

$$\begin{aligned} \begin{array}{l} {\bigvee }_{i=0}^{|u|-1} \big (P(\varphi _2,u^i\cdot \sigma ) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j\cdot \sigma )\big )\vee \big ({\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i\cdot \sigma ) \wedge P(\varphi _2,\sigma ) \big )\\ \quad \text {(variable renaming)}\\ \quad ={\bigvee }_{i=0}^{|u|-1} \big (P(\varphi _2,u^i\cdot \sigma ) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j\cdot \sigma )\big )\vee \big ( P(\varphi _2,\sigma )\wedge {\bigwedge }_{j=0}^{|u|-1} P(\varphi _1,u^j\cdot \sigma ) \big )\\ \qquad \text {(}\sigma =(u\cdot \sigma )^{|u\cdot \sigma |-1}\text {)}\\ \quad = {\bigvee }_{i=0}^{|u\cdot \sigma |-2} \big (P(\varphi _2,(u\cdot \sigma )^i) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j\cdot \sigma )\big )\vee \big ( P(\varphi _2,(u\cdot \sigma )^{|u\cdot \sigma |-1}) \\ \quad \quad \wedge {\bigwedge }_{j=0}^{|u\cdot \sigma |-2} P(\varphi _1,u^j\cdot \sigma ) \big )\\ \quad = {\bigvee }_{i=0}^{|u\cdot \sigma |-1} \big (P(\varphi _2,(u\cdot \sigma )^i) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,(u\cdot \sigma )^j)\big ). \end{array} \end{aligned}$$

Finally:

$$\begin{aligned} \begin{array}{l} P(\varphi _1\mathbf {U}\varphi _2,u\cdot \sigma ) \\ \quad = {\bigvee }_{i=0}^{|u|-1} \big (P(\varphi _2,u^i\cdot \sigma ) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j\cdot \sigma )\big ) \vee \big ({\bigwedge }_{i=0}^{|u|-1} P(\varphi _1,u^i\cdot \sigma ) \wedge P(\varphi _2,\sigma ) \big )\\ \qquad \qquad \vee \big ({\bigwedge }_{i=0}^{|u\cdot \sigma |-1} P(\varphi _1,u^i\cdot \sigma )\wedge \varphi _1\mathbf {U}\varphi _2\big )\\ \quad ={\bigvee }_{i=0}^{|u\cdot \sigma |-1} \big (P(\varphi _2,u^i\cdot \sigma ) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,u^j\cdot \sigma )\big ) \vee \big ({\bigwedge }_{i=0}^{|u\cdot \sigma |-1} P(\varphi _1,u^i\cdot \sigma )\wedge \varphi _1\mathbf {U}\varphi _2\big )\\ \quad ={\bigvee }_{i=0}^{|u\cdot \sigma |-1} \big (P(\varphi _2,(u\cdot \sigma )^i) \wedge {\bigwedge }_{j=0}^{i-1} P(\varphi _1,(u\cdot \sigma )^j)\big ) \\ \qquad \qquad \vee \big ({\bigwedge }_{i=0}^{|u\cdot \sigma |-1} P(\varphi _1,(u\cdot \sigma )^i)\wedge \varphi _1\mathbf {U}\varphi _2\big ) \end{array} \end{aligned}$$

\(\square \)
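
The extension of the progression function to a finite trace, as used in Lemma 5 above and in Lemma 6 below, simply applies the one-step progression to the events of the trace in order. A minimal sketch (ours, continuing the illustrative Python encoding introduced after Lemma 1; it is not the implementation accompanying the paper) is the following.

```python
from functools import reduce

def progress_trace(phi, u):
    """Extended progression P(phi, u) over a finite trace u (a list of events),
    obtained by folding the one-step progression over the events of u in order."""
    return reduce(progress, u, phi)
```

For instance, folding \(\mathbf {F}b\) over the trace \(u=\{a\}\cdot \{b\}\) yields a formula equivalent to \(\bot \vee \top \vee \mathbf {F}b\), i.e., \(\top \), which matches the closed form \(P(\mathbf {F}\varphi ,u)={\bigvee }_{i=0}^{|u|-1} P(\varphi ,u^i)\vee \mathbf {F}\varphi \) given in Lemma 5.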

We introduce another intermediate lemma, which is a consequence of the definition of the \(\mathrm {LTL}\) semantics (Definition 1) and the definition of the progression function (Definition 5), and which will be useful in the remaining proofs. It states that the progression function “mimics” the semantics of \(\mathrm {LTL}\) on a trace \(u\in {\varSigma }^*\).

Lemma 6

Let \(\varphi \) be an \(\mathrm {LTL}\) formula, \(u\in {\varSigma }^*\) a finite trace, and \(w\in {\varSigma }^\omega \) an infinite trace. We have \(u\cdot w\models \varphi \iff w\models P(\varphi ,u)\).
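
For instance (an illustration of ours, not taken from the paper), with \(u=\{a\}\cdot \{a\}\) and \(\varphi =\mathbf {G}a\), Lemma 5 gives \(P(\mathbf {G}a,u)=\top \wedge \top \wedge \mathbf {G}a\), which simplifies to \(\mathbf {G}a\); accordingly, \(u\cdot w\models \mathbf {G}a\) iff \(w\models \mathbf {G}a\), since \(a\) holds at the first two positions of \(u\cdot w\).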

Proof

We shall prove the following statement:

$$\begin{aligned} \forall u\in {\varSigma }^*. \forall w\in {\varSigma }^\omega . \forall \varphi \in \mathrm {LTL}\,.\ u \cdot w\models \varphi \iff w\models P(\varphi ,u). \end{aligned}$$

Let us consider \(u\in {\varSigma }^+\). The proof is done by a structural induction on \(\varphi \in \mathrm {LTL}\,\) (when \(u = \epsilon \), the lemma holds vacuously).

Base case: \(\varphi \in \{\top ,\bot ,p\in {{{AP}_{}}}\}\).

  • Case \(\varphi =\top \). This case is trivial since, using Lemma 5 on \(\top \) and u, we have \(P(\top ,u)=\top \). Moreover, according to the \(\mathrm {LTL}\) semantics of \(\top \), \(\forall w\in {\varSigma }^\omega .\ u\cdot w\models \top \).

  • Case \(\varphi =\bot \). This case is symmetrical to the previous one.

  • Case \(\varphi = p\in {{{AP}_{}}}\).

    • Let us suppose that \(u\cdot w\models p\). According to the \(\mathrm {LTL}\) semantics of atomic propositions, this means that \(p\in u(0)\), and thus, by applying Lemma 5 on \(p\) and \(u\), \(P(p,u)=\top \). Moreover, due to the \(\mathrm {LTL}\) semantics of \(\top \), we have \(\forall w\in {\varSigma }^\omega .\ w\models \top =P(p,u)\).

    • Let us suppose that \(w\models P(p,u)\). Since \(P(p,u)\in \{\top ,\bot \}\), we have necessarily \(P(p,u)=\top \). According to the progression function, \(P(p,u)=\top \) necessitates that \(p\in u(0)\). Using the \(\mathrm {LTL}\) semantics of atomic propositions, we deduce that \((u\cdot w)^0\models p\), i.e., \(u\cdot w\models p\).

Induction case: \(\varphi \in \{\lnot \varphi ', \varphi _1\vee \varphi _2,\varphi _1\wedge \varphi _2, \mathbf {G}\varphi ',\mathbf {F}\varphi ',\mathbf {X}\varphi ',\varphi _1\mathbf {U}\varphi _2\}\). Our induction hypothesis states that the lemma holds for some formulae \(\varphi ',\varphi _1,\varphi _2\in \mathrm {LTL}\,\).

  • Case \(\varphi =\varphi _1\vee \varphi _2\). Recall that, by applying Lemma 5 on \(\varphi _1\vee \varphi _2\) and u, we have \(P(\varphi _1\vee \varphi _2,u)=P(\varphi _1,u)\vee P(\varphi _2,u)\).

    • Let us suppose that \(u\cdot w\models \varphi _1\vee \varphi _2\). Let us distinguish two cases: \(\varphi _1\vee \varphi _2=\top \) and \(\varphi _1\vee \varphi _2\ne \top \). If \(\varphi _1\vee \varphi _2=\top \), then this case reduces to the case where \(\varphi =\top \) already treated. If \(\varphi _1\vee \varphi _2\ne \top \), it means that either \(u\cdot w\models \varphi _1\) or \(u\cdot w\models \varphi _2\). Let us treat the case where \(u\cdot w\models \varphi _1\) (the other case is similar). From \(u\cdot w\models \varphi _1\), we can apply the structural induction hypothesis on \(\varphi _1\) to obtain \(w\models P(\varphi _1,u)\), and then, \(w\models P(\varphi _1,u)\vee P(\varphi _2,u) = P(\varphi _1\vee \varphi _2,u)\).

    • Let us suppose that \(w\models P(\varphi _1\vee \varphi _2,u)\). Let us again distinguish two cases. If \(P(\varphi _1,u)\vee P(\varphi _2,u)=\top \), then it reduces to the case where \(\varphi =\top \) already treated. If \(P(\varphi _1,u)\vee P(\varphi _2,u)\ne \top \), then we have either \(w\models P(\varphi _1,u)\) or \(w\models P(\varphi _2,u)\). Let us treat the case where \(w\models P(\varphi _1,u)\) (the other case is similar). From \(w\models P(\varphi _1,u)\), we can apply the structural induction hypothesis on \(\varphi _1\) to obtain \(u\cdot w\models \varphi _1\), and thus, using the \(\mathrm {LTL}\) semantics of \(\vee \), \(u\cdot w\models \varphi _1\vee \varphi _2\).

  • Case \(\varphi =\varphi _1\wedge \varphi _2\). This case is similar to the previous one.

  • Case \(\varphi =\mathbf {G}\varphi '\). Recall that, by applying Lemma 5 on \(\mathbf {G}\varphi '\) and u, we have \(P(\mathbf {G}\varphi ', u)=\bigwedge _{i=0}^{|u|-1} P(\varphi ',u^i)\wedge \mathbf {G}\varphi '\).

    • Let us suppose that \(u\cdot w\models \mathbf {G}\varphi '\). From the \(\mathrm {LTL}\) semantics of operator \(\mathbf {G}\), we have \(\forall i\in \mathbb {N}^{\ge 0}.\ (u\cdot w)^i\models \varphi '\). In particular, it implies that \(\forall 0\le i\le |u|-1.\ u^i \cdot w\models \varphi '\) and \(\forall i\ge 0.\ ((u\cdot w)^{|u|})^i \models \varphi '\). Using \(\forall 0\le i\le |u|-1.\ u^i\cdot w\models \varphi '\) and applying the structural induction hypothesis on \(\varphi '\) and the \(u^i\)’s, we obtain \(\forall 0\le i\le |u|-1.\ w\models P(\varphi ',u^i)\), and thus \(w\models \bigwedge _{i=0}^{|u|-1} P(\varphi ',u^i)\). Using \(\forall i\ge 0.\ w^i=((u\cdot w)^{|u|})^i \models \varphi '\), we obtain \(w\models \mathbf {G}\varphi '\). As expected, according to the \(\mathrm {LTL}\) semantics of \(\wedge \), we have \(w\models \bigwedge _{i=0}^{|u|-1} P(\varphi ',u^i)\wedge \mathbf {G}\varphi ' =P(\mathbf {G}\varphi ',u)\).

    • Let us suppose that \(w\models P(\mathbf {G}\varphi ',u)\). We have \(\forall 0\le i\le |u|-1.\ w\models P(\varphi ',u^i)\) and \(w\models \mathbf {G}\varphi '\). Using the structural induction hypothesis on \(\varphi '\) and the \(u^i\)’s, it follows that \(\forall 0\le i\le |u|-1.\ u^i\cdot w=(u\cdot w)^i\models \varphi '\). Using the semantics of operator \(\mathbf {G}\), from \(w\models \mathbf {G}\varphi '\) and \(\forall 0\le i\le |u|-1.\ u^i\cdot w=(u\cdot w)^i\models \varphi '\), we deduce \(u\cdot w\models \mathbf {G}\varphi '\).

  • Case \(\varphi =\mathbf {F}\varphi '\). This case is similar to the previous one.

  • Case \(\varphi =\mathbf {X}\varphi '\). Recall that, by applying Lemma 5 on u and \(\mathbf {X}\varphi '\), we have \(P(\mathbf {X}\varphi ',u)=P(\varphi ',u^1\cdot \sigma )\). Using the \(\mathrm {LTL}\) semantics of \(\mathbf {X}\), we have \(u\cdot w\models \mathbf {X}\varphi '\) iff \(u^1\cdot w\models \varphi '\). Thus we have \(u\cdot w\models \mathbf {X}\varphi '\) iff \(u^1\cdot \sigma \cdot w\models \varphi '\) iff (induction hypothesis on \(\varphi '\)) \(w\models P(\varphi ',u^1\cdot \sigma )=P(\mathbf {X}\varphi ',u)\).

  • Case \(\varphi =\lnot \varphi '\). Recall that, by applying Lemma 5 on u and \(\lnot \varphi '\), we have \(P(\lnot \varphi ',u)=\lnot P(\varphi ',u)\). Using the \(\mathrm {LTL}\) semantics of operator \(\lnot \), we have \(\forall \varphi \in \mathrm {LTL}\,. \forall w\in {\varSigma }^\omega .\ w\models \varphi \iff w\not \models \lnot \varphi \). Thus, we have \(u\cdot w\models \lnot \varphi '\) iff \(u\cdot w\not \models \varphi '\) iff (induction hypothesis on \(\varphi '\)) \(w\not \models P(\varphi ',u)\) iff \(w\models \lnot P(\varphi ',u)\) iff \(w\models P(\lnot \varphi ',u)\).

  • Case \(\varphi =\varphi _1\mathbf {U}\varphi _2\). Recall that, by applying Lemma 5 on u and \(\varphi _1\mathbf {U}\varphi _2\), we have:

    $$\begin{aligned} P(\varphi _1\mathbf {U}\varphi _2,u)=\bigvee _{i=0}^{|u|-1} \big (P(\varphi _2,u^i) \wedge \bigwedge _{j=0}^{i-1} P(\varphi _1,u^j)\big ) \vee \bigwedge _{i=0}^{|u|-1} P(\varphi _1,u^i)\wedge \varphi _1\mathbf {U}\varphi _2. \end{aligned}$$
    • Let us suppose that \(u\cdot w\models \varphi _1\mathbf {U}\varphi _2\). According to the \(\mathrm {LTL}\) semantics of operator \(\mathbf {U}\), \(\exists k\in \mathbb {N}^{\ge 0}.\ (u\cdot w)^k\models \varphi _2 \wedge \forall 0\le l < k.\ (u\cdot w)^l\models \varphi _1\). Let us distinguish two cases: \(k\ge |u|\) and \(k\le |u|-1\).

      • If \(k\ge |u|\), then we have in particular \(\forall 0\le l \le |u|-1.\ u^l\cdot w\models \varphi _1\). Applying the structural induction hypothesis on \(\varphi _1\) and the \(u^l\)’s, we find \(\forall 0\le l \le |u|-1.\ w\models P(\varphi _1,u^l)\), i.e., \(w\models \bigwedge _{l=0}^{|u|-1} P(\varphi _1,u^l)\). From \((u\cdot w)^k\models \varphi _2\) and \(k\ge |u|\), we deduce that \(w^{k'}\models \varphi _2\) with \(k'=k-|u|\). Furthermore, we have \(\forall 0\le l< k'.\ w^l=(u\cdot w)^{|u|+l}\models \varphi _1\), and thus \(w\models \varphi _1\mathbf {U}\varphi _2\). Finally, \(w\models \bigwedge _{l=0}^{|u|-1} P(\varphi _1,u^l) \wedge \varphi _1\mathbf {U}\varphi _2\), and thus \(w\models P(\varphi _1\mathbf {U}\varphi _2,u)\).

      • If \(k\le |u|-1\), then from \((u\cdot w)^k\models \varphi _2\), we have \(u^k\cdot w\models \varphi _2\). Using the induction hypothesis on \(\varphi _2\) and \(u^k\), we have \(w\models P(\varphi _2,u^k)\). Moreover, using \(\forall 0\le l< k.\ (u\cdot w)^l=u^l \cdot w\models \varphi _1\) and the induction hypothesis on \(\varphi _1\) and the \(u^l\)’s, we obtain \(\forall 0\le l< k.\ w\models P(\varphi _1,u^l)\). Finally, we have \(w\models P(\varphi _2,u^k) \wedge \bigwedge _{l=0}^{k-1} P(\varphi _1,u^l)\), and thus \(w\models P(\varphi _1\mathbf {U}\varphi _2,u)\).

    • Let us suppose that \(w\models P(\varphi _1\mathbf {U}\varphi _2,u)\). We distinguish two sub-cases:

      \(P(\varphi _1\mathbf {U}\varphi _2,u)=\top \) and \(P(\varphi _1\mathbf {U}\varphi _2,u)\ne \top \).

      • Sub-case \(P(\varphi _1\mathbf {U}\varphi _2,u)=\top \). We distinguish again three sub-cases:

        • Sub-case \(\bigvee _{i=0}^{|u|-1} \big (P(\varphi _2,u^i) \wedge \bigwedge _{j=0}^{i-1} P(\varphi _1,u^j)\big ) =\top \). Necessarily, we have \(\exists 0\le i\le |u|-1.\ P(\varphi _2,u^i) \wedge \bigwedge _{j=0}^{i-1} P(\varphi _1,u^j)=\top \). Otherwise, that would mean that \(\exists i_1,i_2\in [0,|u|-1].\ P(\varphi _2,u^{i_1}) \wedge \bigwedge _{j=0}^{i_1-1} P(\varphi _1,u^j) = \lnot \big (P(\varphi _2,u^{i_2}) \wedge \bigwedge _{j=0}^{i_2-1} P(\varphi _1,u^j)\big )\) and we would obtain a contradiction. From \(P(\varphi _2,u^i) \wedge \bigwedge _{j=0}^{i-1} P(\varphi _1,u^j)=\top \), we have \(P(\varphi _2,u^i) =\top \) and \(\bigwedge _{j=0}^{i-1} P(\varphi _1,u^j)=\top \). Using the induction hypothesis on \(\varphi _1\) and \(\varphi _2\), we obtain \(u^i \cdot w\models \varphi _2\) and \(\forall 0\le j< i.\ u^j \cdot w\models \varphi _1\). According to the \(\mathrm {LTL}\) semantics of operator \(\mathbf {U}\), it means \(u\cdot w\models \varphi _1\mathbf {U}\varphi _2\).

        • Sub-case \(\bigwedge _{i=0}^{|u|-1} P(\varphi _1,u^i)\wedge \varphi _1\mathbf {U}\varphi _2=\top \). In this case, we have necessarily \(\varphi _1\mathbf {U}\varphi _2=\top \), and this case reduces to the case where \(\varphi =\top \).

        • Sub-case \(\bigvee _{i=0}^{|u|-1} \big (P(\varphi _2,u^i) \wedge \bigwedge _{j=0}^{i-1} P(\varphi _1,u^j)\big ) \ne \top \) and \(\bigwedge _{i=0}^{|u|-1} P(\varphi _1,\) \(u^i)\wedge \varphi _1\mathbf {U}\varphi _2\ne \top \). We have then

          $$\begin{aligned} \bigvee _{i=0}^{|u|-1} \big (P(\varphi _2,u^i) \wedge \bigwedge _{j=0}^{i-1} P(\varphi _1,u^j)\big ) = \lnot \Big (\bigwedge _{i=0}^{|u|-1} P(\varphi _1,u^i)\wedge \varphi _1\mathbf {U}\varphi _2 \Big ). \end{aligned}$$

          Let us suppose that \(\forall i\in \mathbb {N}^{\ge 0}.\ (u\cdot w)^i\not \models \varphi _2\). Following the induction hypothesis on \(\varphi _2\), it means in particular that \(\forall 0\le i\le |u|-1.\ w\not \models P(\varphi _2,u^i)\). Then, since \(w\models P(\varphi _1\mathbf {U}\varphi _2,u)\), it would imply that \(w\models \bigwedge _{i=0}^{|u|-1} P(\varphi _1,u^i)\wedge \varphi _1\mathbf {U}\varphi _2\). But, from \(w\models \varphi _1\mathbf {U}\varphi _2\), we would obtain a contradiction according to the \(\mathrm {LTL}\) semantics of \(\mathbf {U}\). Hence, let i be the minimal \(k\in \mathbb {N}^{\ge 0}\) s.t. \((u\cdot w)^k\models \varphi _2\). If \(i>|u|-1\), then similarly we have \(w\models \bigwedge _{i=0}^{|u|-1} P(\varphi _1,u^i)\wedge \varphi _1\mathbf {U}\varphi _2\). It follows that \(\forall 0\le l\le |u|-1.\ u^l\cdot w\models \varphi _1\) and \(\forall |u|\le l <i.\ (u\cdot w)^l \models \varphi _1\), and thus \(u\cdot w\models \varphi _1\mathbf {U}\varphi _2\). Else (\(i\le |u|-1\)), we can follow a similar reasoning to obtain the expected result.

      • Sub-case \(P(\varphi _1\mathbf {U}\varphi _2,u)\ne \top \). Similarly, in this case, we can show that \(\exists k\in \mathbb {N}^{\ge 0}.\ (u\cdot w)^k\models \varphi _2\). Then we consider \(k_{\min }\), the minimal k s.t. \((u\cdot w)^k\models \varphi _2\). Then, we can show that \(\forall k'<k_\mathrm{min}.\ (u\cdot w)^{k'} \models \varphi _1\), and thus \(u\cdot w\models \varphi _1\mathbf {U}\varphi _2\).

\(\square \)
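For readers who prefer an operational reading of the proofs above, the following sketch (in OCaml, not the authors' implementation) shows one possible encoding of LTL formulae together with the event-wise progression function and its extension to a finite trace; the extension is exactly the event-by-event application used in the derivations of the proof of Lemma 8 below. The datatype, the function names and the absence of any simplification of the progressed formula are assumptions of this sketch.

```ocaml
(* Minimal sketch: events are sets of atomic propositions. *)
module Props = Set.Make (String)
type event = Props.t

(* LTL formulae, including the past obligation Xbar^d p that only
   arises in the decentralised setting of the next appendix. *)
type ltl =
  | True | False
  | Prop of string
  | PrevObl of int * string      (* Xbar^d p, with d >= 1 *)
  | Not of ltl
  | Or of ltl * ltl
  | And of ltl * ltl
  | Next of ltl                  (* X phi *)
  | Globally of ltl              (* G phi *)
  | Finally of ltl               (* F phi *)
  | Until of ltl * ltl           (* phi1 U phi2 *)

(* Event-wise progression P(phi, sigma); no simplification is performed. *)
let rec progress (phi : ltl) (sigma : event) : ltl =
  match phi with
  | True -> True
  | False -> False
  | Prop p -> if Props.mem p sigma then True else False
  | PrevObl _ -> phi             (* past obligations are handled by the decentralised rules only *)
  | Not f -> Not (progress f sigma)
  | Or (f, g) -> Or (progress f sigma, progress g sigma)
  | And (f, g) -> And (progress f sigma, progress g sigma)
  | Next f -> f
  | Globally f -> And (progress f sigma, Globally f)
  | Finally f -> Or (progress f sigma, Finally f)
  | Until (f, g) -> Or (progress g sigma, And (progress f sigma, Until (f, g)))

(* P(phi, u) for a finite trace u, obtained event by event. *)
let progress_trace (phi : ltl) (u : event list) : ltl =
  List.fold_left progress phi u
```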

1.1.4 Proof for Theorem 1 (p. 7)

We shall prove the following statement:

$$\begin{aligned}&\forall u\in {\varSigma }^*.\forall \varphi \in \mathrm {LTL}\,. \left( u \models _C \varphi = \top /\bot \implies u \models _3 \varphi = \top /\bot \right) \\&\quad \wedge \left( u \models _3 \varphi = {?} \implies u \models _C \varphi = {?}\right) \end{aligned}$$

The proof uses the definition of \(\mathrm {LTL}\) semantics (Definition 1), the definition of good and bad prefixes (Definition 5), the progression function (Definition 3), and Lemma 6.

Proof

According to Lemma 6, we have \(\forall u\in {\varSigma }^+.\forall w\in {\varSigma }^\omega .\ u\cdot w\models \varphi \iff w\models P(\varphi ,u)\), and, equivalently, \(\forall u\in {\varSigma }^+.\forall w\in {\varSigma }^\omega .\ u\cdot w\not \models \varphi \iff w\not \models P(\varphi ,u)\). Consequently, for any \(u\in {\varSigma }^+\), when \(P(\varphi ,u)=\top \), we have \(\forall w\in {\varSigma }^\omega .\ u\cdot w\models \varphi \), i.e., \(u\in {{\mathrm{good}}}(\varphi )\). Also, when \(P(\varphi ,u)=\bot \), we have \(\forall w\in {\varSigma }^\omega .\ u\cdot w\not \models \varphi \), i.e., \(u\in {{\mathrm{bad}}}(\varphi )\). \(\square \)
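Operationally, Theorem 1 says that the verdict read off the progressed formula can be interpreted soundly in the three-valued semantics. Below is a minimal sketch reusing the functions of the previous block; the simplification function simp, which is assumed to reduce the progressed formula to True or False whenever it is valid or unsatisfiable, is an assumption of this sketch and is not shown.

```ocaml
type verdict = Top | Bot | Unknown

(* Evaluate a finite trace u against phi: progress, simplify, read the verdict. *)
let verdict_of (simp : ltl -> ltl) (phi : ltl) (u : event list) : verdict =
  match simp (progress_trace phi u) with
  | True -> Top        (* u is a good prefix of phi *)
  | False -> Bot       (* u is a bad prefix of phi  *)
  | _ -> Unknown       (* no verdict can be given yet *)
```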

1.2 (B) Proofs for Sect. 6

1.2.1 Proof of Lemma 4 (p. 15)

Consider \(\mathcal{M}=\{M_1,\ldots ,M_n\}\), where each monitor \(M_i\) has a set of local atomic propositions \({{{AP}_{i}}}={\varPi }_{i}({{{AP}_{}}})\) and observes the set of events \({\varSigma }_i\). We shall prove that:

$$\begin{aligned} \forall i\in [1,n]. \forall \sigma \in {\varSigma }_i.\forall \varphi \in \mathrm {LTL}\,. \forall \overline{\mathbf {X}}^d p\in {{\mathrm{sus}}}\big (P(\varphi ,\sigma ,{{{AP}_{i}}})\big ).\ d>1 \implies \overline{\mathbf {X}}^{d-1}p\in {{\mathrm{sus}}}(\varphi ) \end{aligned}$$

Let us consider some \(i\in [1,n]\) and \(\sigma \in {\varSigma }_i\), where \({\varSigma }_i\subseteq {\varSigma }\). The proof is done by structural induction on \(\varphi \in \mathrm {LTL}\,\).

Base Case: \(\varphi \in \{\top ,\bot ,p'\}\) for some \(p'\in {{{AP}_{}}}\).

  • Case \(\varphi =\top \). In this case, the proof is trivial since \(P(\top ,\sigma ,{{{AP}_{i}}})=\top \) and \({{\mathrm{sus}}}(\top )=\emptyset \).

  • Case \(\varphi =\bot \). This case is similar to the previous one.

  • Case \(\varphi =p'\in {{{AP}_{}}}\). If \(p'\in {{{AP}_{i}}}\), then \(P(p',\sigma ,{{{AP}_{i}}})\in \{\top ,\bot \}\) and \({{\mathrm{sus}}}\big (P(p',\sigma ,{{{AP}_{i}}})\big )=\emptyset \). Else (\(p'\notin {{{AP}_{i}}}\)), \(P(p',\sigma ,{{{AP}_{i}}})=\overline{\mathbf {X}}p'\) and \({{\mathrm{sus}}}\big (P(p',\sigma ,{{{AP}_{i}}})\big )=\{\overline{\mathbf {X}}^1p'\}\); since \(d=1\) in this case, the result holds vacuously.

Induction Case: \(\varphi \in \{\lnot \varphi ', \varphi _1\vee \varphi _2,\varphi _1\wedge \varphi _2, \overline{\mathbf {X}}^{d'}p',\mathbf {G}\varphi ',\mathbf {F}\varphi ',\mathbf {X}\varphi ', \varphi _1\mathbf {U}\varphi _2\}\). Our induction hypothesis states that the result holds for some formulae \(\varphi ',\varphi _1,\varphi _2\in \mathrm {LTL}\,\).

  • Case \(\varphi =\lnot \varphi '\). On the one hand, we have

    $$\begin{aligned} \begin{array}{ll} {{\mathrm{sus}}}\big (P(\lnot \varphi ',\sigma ,{{{AP}_{i}}})\big )&{}={{\mathrm{sus}}}\big (\lnot P(\varphi ',\sigma ,{{{AP}_{i}}})\big )\\ &{}={{\mathrm{sus}}}\big (P(\varphi ',\sigma ,{{{AP}_{i}}})\big ). \end{array} \end{aligned}$$

    On the other hand, we have \({{\mathrm{sus}}}(\lnot \varphi ')={{\mathrm{sus}}}(\varphi ')\). Thus, by applying directly the induction hypothesis on \(\varphi '\), we obtain the expected result.

  • Case \(\varphi =\varphi _1\vee \varphi _2\). On the one hand, we have

    $$\begin{aligned} \begin{array}{ll} {{\mathrm{sus}}}\big (P(\varphi _1\vee \varphi _2,\sigma ,{{{AP}_{i}}})\big ) &{}= {{\mathrm{sus}}}\big (P(\varphi _1,\sigma ,{{{AP}_{i}}})\vee P(\varphi _2,\sigma ,{{{AP}_{i}}})\big )\\ &{}={{\mathrm{sus}}}\big (P(\varphi _1,\sigma ,{{{AP}_{i}}})\big )\cup {{\mathrm{sus}}}\big (P(\varphi _2,\sigma ,{{{AP}_{i}}})\big ). \end{array} \end{aligned}$$

    Thus, \(\overline{\mathbf {X}}^dp\in {{\mathrm{sus}}}\big (P(\varphi _1\vee \varphi _2,\sigma ,{{{AP}_{i}}})\big )\) implies that \(\overline{\mathbf {X}}^dp\in {{\mathrm{sus}}}\big (P(\varphi _1,\sigma ,{{{AP}_{i}}})\big )\) or \(\overline{\mathbf {X}}^dp\in {{\mathrm{sus}}}\big (P(\varphi _2,\sigma ,{{{AP}_{i}}})\big )\). On the other hand, \({{\mathrm{sus}}}(\varphi _1\vee \varphi _2)={{\mathrm{sus}}}(\varphi _1)\cup {{\mathrm{sus}}}(\varphi _2)\). Hence, the result can be obtained by applying the induction hypothesis on either \(\varphi _1\) or \(\varphi _2\) depending on whether \(\overline{\mathbf {X}}^dp\in {{\mathrm{sus}}}\big (P(\varphi _1,\sigma ,{{{AP}_{i}}})\big )\) or \(\overline{\mathbf {X}}^dp\in {{\mathrm{sus}}}\big (P(\varphi _2,\sigma ,{{{AP}_{i}}})\big )\).

  • Case \(\varphi =\overline{\mathbf {X}}^{d'}p'\) for some \(d'\in \mathbb {N}\) and \(p'\in {{{AP}_{}}}\). On the one hand, if \(p'\in {{{AP}_{i}}}\), then \(P(\overline{\mathbf {X}}^{d'}p',\sigma ,{{{AP}_{i}}})\in \{\top ,\bot \}\) and \({{\mathrm{sus}}}\big (P(\overline{\mathbf {X}}^{d'}p',\sigma ,{{{AP}_{i}}})\big )=\emptyset \). Else (\(p'\notin {{{AP}_{i}}}\)), we have \(P(\overline{\mathbf {X}}^{d'}p',\sigma ,{{{AP}_{i}}})=\overline{\mathbf {X}}^{d'+1}p'\). On the other hand, we have \({{\mathrm{sus}}}(\overline{\mathbf {X}}^{d'}p')=\{\overline{\mathbf {X}}^{d'}p'\}\). Hence, for \(d=d'+1>1\), we indeed have \(\overline{\mathbf {X}}^{d-1}p'=\overline{\mathbf {X}}^{d'}p'\in {{\mathrm{sus}}}(\overline{\mathbf {X}}^{d'}p')\).

  • Case \(\varphi =\mathbf {G}\varphi '\). By definition of the progression rule for \(\mathbf {G}\) and the definition of \({{\mathrm{sus}}}\), we have

    $$\begin{aligned} \begin{array}{ll} {{\mathrm{sus}}}\big (P(\mathbf {G}\varphi ',\sigma ,{{{AP}_{i}}})\big ) &{} ={{\mathrm{sus}}}\big (P(\varphi ',\sigma ,{{{AP}_{i}}})\wedge \mathbf {G}\varphi '\big )\\ &{} ={{\mathrm{sus}}}\big (P(\varphi ',\sigma ,{{{AP}_{i}}})\big ). \end{array} \end{aligned}$$

    Since \(\varphi '\) is behind a future temporal operator, the only case where \({{\mathrm{sus}}}\big (P(\varphi ',\sigma ,\) \({{{AP}_{i}}})\big )\ne \emptyset \) is when \(\varphi '\) is a state-formula. In that case, we have \(\overline{\mathbf {X}}^dp\in {{\mathrm{sus}}}\big (P(\varphi ',\sigma ,\) \({{{AP}_{i}}})\big )\) implies that \(d=1\).

  • Cases \(\varphi \in \{\mathbf {F}\varphi ',\mathbf {X}\varphi ',\varphi _1\mathbf {U}\varphi _2\}\). These cases are similar to the previous one. \(\square \)
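To make the case analysis above easier to follow, here is a small OCaml sketch of the sus function as it is used in this proof, over the formula type of the earlier sketch: Boolean constants and plain propositions contribute no urgent obligation, an obligation \(\overline{\mathbf {X}}^d p\) contributes itself, Boolean connectives are traversed, and any sub-formula guarded by a future temporal operator is ignored. The set representation, and the reading of the urgency \({\varUpsilon }\) as the maximal d of a pending obligation, are assumptions of this sketch.

```ocaml
(* Urgent past obligations are represented as pairs (d, p) standing for Xbar^d p. *)
module Obl = Set.Make (struct
  type t = int * string
  let compare = compare
end)

let rec sus (phi : ltl) : Obl.t =
  match phi with
  | True | False | Prop _ -> Obl.empty
  | PrevObl (d, p) -> Obl.singleton (d, p)
  | Not f -> sus f
  | Or (f, g) | And (f, g) -> Obl.union (sus f) (sus g)
  (* sub-formulae behind a future temporal operator are not urgent *)
  | Next _ | Globally _ | Finally _ | Until _ -> Obl.empty

(* A possible reading of the urgency of a formula: the deepest pending obligation. *)
let urgency (phi : ltl) : int =
  Obl.fold (fun (d, _) m -> max d m) (sus phi) 0
```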

1.2.2 Proof of Theorem 2 (p. 12)

We have to prove that, for any local obligation \(\overline{\mathbf {X}}^m p \in \mathrm {LTL}\,\) of some monitor \(M_i \in \mathcal {M}\) at any time \(t \in \mathbb {N}^{\ge 0}\), we have \(m\le \min (|\mathcal {M}|,t+1)\). We will suppose that there are at least two components in the system (otherwise, the proof is trivial), i.e., \(|\mathcal{M}|\ge 2\). The proof is done by distinguishing three cases according to the value of \(t\in \mathbb {N}^{\ge 0}\).

First case: \(t=0\). In this case, we shall prove that \(m\le 1\). The proof is done by a structural induction on \(\varphi \in \mathrm {LTL}\,\). Recall that for this case, where \(t=0\), we have \(\forall i\in [1,|\mathcal{M}|].\ {{\mathrm{lo}}}(i,0,\varphi )= P(\varphi ,u_i(0),{{{AP}_{i}}})\).

Base case: \(\varphi \in \{\top ,\bot ,p\in {{{AP}_{}}}\}\).

  • Case \(\varphi =\top \). In this case we have \(\forall i\in [1,|\mathcal{M}|].\ {{\mathrm{lo}}}(i,0,\top )=P(\top ,u_i(0),{{{AP}_{i}}})=\top \). Moreover, \({{\mathrm{sus}}}(\top )=\emptyset \).

  • Case \(\varphi =\bot \). This case is symmetrical to the previous one.

  • Case \(\varphi =p\in {{{AP}_{}}}\). We distinguish two cases: \(p\in {{{AP}_{i}}}\) and \(p\notin {{{AP}_{i}}}\). If \(p\in {{{AP}_{i}}}\), then \({{\mathrm{lo}}}(i,0,p)\in \{\top ,\bot \}\) and \({{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,p)\big )=\emptyset \). Else (\(p\notin {{{AP}_{i}}}\)), we have \({{\mathrm{lo}}}(i,0,p)=\overline{\mathbf {X}}p\), and \({{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,p)\big )=\{\overline{\mathbf {X}}p\}=\{\overline{\mathbf {X}}^1 p\}\).

Structural induction case: \(\varphi \in \{\lnot \varphi ',\varphi _1\vee \varphi _2,\varphi _1\wedge \varphi _2,\mathbf {G}\varphi ', \mathbf {F}\varphi ',\mathbf {X}\varphi ',\varphi _1\mathbf {U}\varphi _2\}\). Our induction hypothesis states that the result holds for some formulae \(\varphi ',\varphi _1,\varphi _2\in \mathrm {LTL}\,\).

  • Case \(\varphi =\varphi _1\vee \varphi _2\). We have:

    $$\begin{aligned} \begin{array}{ll} {{\mathrm{lo}}}(i,0,\varphi _1\vee \varphi _2) \\ \quad = P(\varphi _1\vee \varphi _2,u_i(0),{{{AP}_{i}}})&{} \quad \text {(}{{\mathrm{lo}}}\text { definition for } t=0\text {)} \\ \quad = P(\varphi _1,u_i(0),{{{AP}_{i}}}) \vee P(\varphi _2,u_i(0),{{{AP}_{i}}}) &{} \quad \text {(progression on events)}\\ \quad = {{\mathrm{lo}}}(i,0,\varphi _1) \vee {{\mathrm{lo}}}(i,0,\varphi _2) &{} \quad \text {(} {{\mathrm{lo}}}\text { definition for } t=0\text {)}\\ {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _1\vee \varphi _2)\big ) \\ \quad ={{\mathrm{sus}}}\big ( {{\mathrm{lo}}}(i,0,\varphi _1) \vee {{\mathrm{lo}}}(i,0,\varphi _2)\big ) &{}\\ \quad = {{\mathrm{sus}}}\big ( {{\mathrm{lo}}}(i,0,\varphi _1)\big )\cup {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _2)\big ) &{} \quad \text {(} {{\mathrm{sus}}}\text { definition)}\\ \end{array} \end{aligned}$$

    We can apply the induction hypothesis on \(\varphi _1\) and \(\varphi _2\) to obtain successively:

    $$\begin{aligned} \begin{array}{l} \forall \overline{\mathbf {X}}^m p\in {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _1)\big ).\ m\le 1\\ \forall \overline{\mathbf {X}}^m p\in {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _2)\big ).\ m\le 1 \\ \forall \overline{\mathbf {X}}^m p\in {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _1)\big )\cup {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _2)\big ).\ m\le 1 \\ \end{array} \end{aligned}$$
  • Case \(\varphi =\lnot \varphi '\). We have:

    $$\begin{aligned} \begin{array}{rll} {{\mathrm{lo}}}(i,0,\lnot \varphi ') &{} = P(\lnot \varphi ',u_i(0),{{{AP}_{i}}}) &{} \quad \text {(} {{\mathrm{lo}}}\text { definition)}\\ &{} = \lnot P(\varphi ',u_i(0),{{{AP}_{i}}}) &{} \quad \text {(progression on events)}\\ {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\lnot \varphi ')\big ) &{} ={{\mathrm{sus}}}\big (\lnot P(\varphi ',u_i(0),{{{AP}_{i}}})\big ) \\ &{} ={{\mathrm{sus}}}\big (P(\varphi ',u_i(0),{{{AP}_{i}}})\big )&{} \quad \text {(} {{\mathrm{sus}}}\text { definition)} \\ &{} = {{\mathrm{sus}}}\big ( {{\mathrm{lo}}}(i,0,\varphi ')\big )\\ \end{array} \end{aligned}$$
  • Case \(\varphi =\mathbf {X}\varphi '\). We have:

    $$\begin{aligned} \begin{array}{rll} {{\mathrm{lo}}}(i,0,\mathbf {X}\varphi ') &{} = P(\mathbf {X}\varphi ',u_i(0),{{{AP}_{i}}}) &{} \quad \text {(} {{\mathrm{lo}}}\text { definition)} \\ &{} = \varphi ' &{} \quad \text {(progression on events)} \\ {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\mathbf {X}\varphi ')\big ) &{} ={{\mathrm{sus}}}(\varphi ') \\ \end{array} \end{aligned}$$

    Since \(\varphi '\) is behind a future temporal operator, we have \({{\mathrm{sus}}}(\varphi ')=\emptyset \).

  • Case \(\varphi =\mathbf {G}\varphi '\). We have:

    $$\begin{aligned} \begin{array}{rll} {{\mathrm{lo}}}(i,0,\mathbf {G}\varphi ') &{} = P(\mathbf {G}\varphi ',u_i(0),{{{AP}_{i}}}) &{} \quad \text {(} {{\mathrm{lo}}}\text { definition)}\\ &{} = P(\varphi ',u_i(0),{{{AP}_{i}}}) \wedge \mathbf {G}\varphi ' &{} \quad \text {(progression on events)}\\ &{} = {{\mathrm{lo}}}(i,0,\varphi ') \wedge \mathbf {G}\varphi ' &{} \quad \text {(} {{\mathrm{lo}}}\text { definition for } \varphi '\text {)} \\ {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\mathbf {G}\varphi ')\big ) &{} ={{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi ') \wedge \mathbf {G}\varphi '\big ) \\ &{} = {{\mathrm{sus}}}\big ( {{\mathrm{lo}}}(i,0,\varphi ')\big )\cup {{\mathrm{sus}}}(\mathbf {G}\varphi ') &{} \quad \text {(} {{\mathrm{sus}}}\text { definition)}\\ &{} = {{\mathrm{sus}}}\big ( {{\mathrm{lo}}}(i,0,\varphi ')\big )&{} \quad \text {(} {{\mathrm{sus}}}(\mathbf {G}\varphi ')=\emptyset \text {)} \\ \end{array} \end{aligned}$$
  • Case \(\varphi =\mathbf {F}\varphi '\). This case is similar to the previous one.

  • Case \(\varphi =\varphi _1\mathbf {U}\varphi _2\). We have:

    $$\begin{aligned} \begin{array}{ll} {{\mathrm{lo}}}(i,0,\varphi _1\mathbf {U}\varphi _2) &{} \\ \quad = P(\varphi _1\mathbf {U}\varphi _2,u_i(0),{{{AP}_{i}}}) &{} \quad \text {(}{{\mathrm{lo}}}\text { definition for } t=0\text {)} \\ \quad = P(\varphi _2,u_i(0),{{{AP}_{i}}}) \vee \big (P(\varphi _1,u_i(0),{{{AP}_{i}}})\wedge \varphi _1\mathbf {U}\varphi _2\big ) &{} \quad \text {(progression on events)}\\ \quad = {{\mathrm{lo}}}(i,0,\varphi _2) \vee \big ({{\mathrm{lo}}}(i,0,\varphi _1) \wedge \varphi _1\mathbf {U}\varphi _2\big ) &{} \quad \text {(}{{\mathrm{lo}}}\text { definition for } \varphi _1 \text { and }\varphi _2\text {)}\\ {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _1\mathbf {U}\varphi _2)\big ) &{} \\ \quad ={{\mathrm{sus}}}\big ( {{\mathrm{lo}}}(i,0,\varphi _2) \vee \big ({{\mathrm{lo}}}(i,0,\varphi _1) \wedge \varphi _1\mathbf {U}\varphi _2\big )\big ) &{}\\ \quad = {{\mathrm{sus}}}\big ( {{\mathrm{lo}}}(i,0,\varphi _2)\big )\cup {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _1)\big )\cup {{\mathrm{sus}}}(\varphi _1\mathbf {U}\varphi _2) &{} \quad \text {(}{{\mathrm{sus}}}\text { definition)}\\ \quad = {{\mathrm{sus}}}\big ( {{\mathrm{lo}}}(i,0,\varphi _2)\big )\cup {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(i,0,\varphi _1)\big ) &{} \quad \text {(}{{\mathrm{sus}}}(\varphi _1\mathbf {U}\varphi _2)=\emptyset \text {)}\\ \end{array} \end{aligned}$$

For \(t\ge 1\), the proof is done by reductio ad absurdum. Let us consider some \(t\ge 1\) and suppose that the theorem does not hold at time t. It means that:

$$\begin{aligned} \exists \varphi \in \mathrm {LTL}\,.\exists i\in [1,|\mathcal{M}|]. \exists \overline{\mathbf {X}}^d p\in {{\mathrm{ulo}}}(i,t,\varphi ).\ d> \min (|\mathcal{M}|, t+1). \end{aligned}$$

According to Lemma 3, since \({{\mathrm{ulo}}}(i,t,\varphi )=\bigcup _{j=1,j\ne i}^{|\mathcal M|} {{\mathrm{sus}}}\big (P({{\mathrm{received}}}(i,t,j),u_i(t),{{{AP}_{i}}})\big )\), it means that \(\exists j_1\in [1,|\mathcal{M}|]{\setminus }\{i\}. \overline{\mathbf {X}}^d p\in {{\mathrm{sus}}}\big (P({{\mathrm{received}}}(i,t,j_1),u_i(t),{{{AP}_{i}}})\big )\). Using Lemma 4, we have \(\overline{\mathbf {X}}^{d-1} p\in {{\mathrm{sus}}}({{\mathrm{received}}}(i,t,j_1))\). It implies that \({{\mathrm{send}}}(j_1,t-1,i)=\mathrm {true}\) and \(M_i={{\mathrm{Mon}}}\big (M_{j_1},{{\mathrm{Prop}}}({{\mathrm{ulo}}}(j_1,t-1,\varphi ))\big )\). We deduce that \(i=\min \big \{j\in [1,|\mathcal{M}|]{\setminus }\{j_1\}\mid \exists p\in {{\mathrm{Prop}}}({{\mathrm{ulo}}}(j,t-1,\varphi )).\ p\in {{{AP}_{i}}}\big \}\). Moreover, from \(\overline{\mathbf {X}}^{d}p\in {{\mathrm{ulo}}}(i,t,\varphi )\), we find \(p\notin {{{AP}_{{i'}}}}\), with \(i<i'\).

We can apply the same reasoning on \(\overline{\mathbf {X}}^{d-1}p\) to find that \(i<j_1<i'\) and \(p\notin {\varPi }_{j_1}({{{AP}_{}}})\). Following the same reasoning and using Lemma 4, we can find a set of indexes \(\{j_1,\ldots ,j_d\}\) s.t.

$$\begin{aligned} \begin{array}{ll} &{}\{j_1,\ldots ,j_d\}\supseteq [1,|\mathcal{M}|]\\ &{} \quad \wedge \forall j\in \{j_1,\ldots ,j_d\}.\ p\notin {{{AP}_{j}}}\wedge j\in [1,|\mathcal{M}|] \end{array} \end{aligned}$$

Moreover, due to the ordering between components, we know that \(\forall k_1,k_2 \in [1,d].\ k_1 < k_2 \implies j_{k_1} < j_{k_2}\).

Case \(0<t< |\mathcal{M}|\). In this case we have \(d >t+1\), and thus \(\overline{\mathbf {X}}^{d'} p\in {{\mathrm{sus}}}\big ({{\mathrm{lo}}}(j_t,0,\varphi )\big )\) with \(d'>1\), which contradicts the result shown for \(t=0\).

Case \(t\ge |\mathcal{M}|\). In this case, \(\forall k_1,k_2 \in [1,d].\ k_1 < k_2 \implies j_{k_1} < j_{k_2}\) implies that \(\forall j_{k_1},j_{k_2}\in \{j_1,\ldots ,j_d\}.\ k_1\ne k_2\implies j_{k_1}\ne j_{k_2}\). Hence, we have \(p\notin \bigcup _{k=1}^{d} {{{AP}_{j_k}}} \supseteq {{{AP}_{}}}\), which is impossible since \(p\in {{{AP}_{}}}\). \(\square \)

1.2.3 Intermediate lemmas and notation

Let us first introduce some notation. Consider \(\varphi \in \mathrm {LTL}\,, u\in {\varSigma }^+, i\in [1,|\mathcal{M}|]\):

  • \({{\mathrm{rp}}}(\varphi ,u)\) is the formula \(\varphi \) where past sub-formulae are removed and replaced by their evaluations using trace u. Formally:

    $$\begin{aligned} \begin{array}{ll} {{\mathrm{rp}}}(\varphi ,u) &{}= \mathtt {match}\; \varphi \; \mathtt {with}\\ &{} \begin{array}{ll} \mid \overline{\mathbf {X}}^d p &{}\implies \left\{ \begin{array}{ll} \top &{} \quad \text { if } \quad p \in u(|u|-d)\\ \bot &{} \quad \text { otherwise }\\ \end{array} \right. \\ \mid \varphi _1 \wedge \varphi _2&{} \implies {{\mathrm{rp}}}(\varphi _1,u) \wedge {{\mathrm{rp}}}(\varphi _2,u)\\ \mid \varphi _1 \vee \varphi _2&{} \implies {{\mathrm{rp}}}(\varphi _1,u) \vee {{\mathrm{rp}}}(\varphi _2,u)\\ \mid \lnot \varphi ' &{} \implies \lnot {{\mathrm{rp}}}(\varphi ',u)\\ \mid \_ &{} \implies \varphi \end{array} \end{array} \end{aligned}$$
  • \({{\mathrm{rp}}}(\varphi ,u,i)\) is the formula \(\varphi \) where past sub-formulae are removed (if possible) and replaced by their evaluations using only the sub-trace \(u_i\) of u.

    $$\begin{aligned} \begin{array}{ll} {{\mathrm{rp}}}(\varphi ,u,i) &{}= \mathtt {match}\; \varphi \; \mathtt {with}\\ &{} \begin{array}{ll} \mid \overline{\mathbf {X}}^d p &{} \implies \left\{ \begin{array}{ll} \top &{} \quad \text { if } \quad p \in u(|u|-d)\\ \bot &{} \quad \text { if }\quad p \notin u(|u|-d) \text { and } p\in {{{AP}_{i}}} \\ \overline{\mathbf {X}}^d p &{} \quad \text { otherwise } \end{array} \right. \\ \mid \varphi _1 \wedge \varphi _2&{} \implies {{\mathrm{rp}}}(\varphi _1,u,i) \wedge {{\mathrm{rp}}}(\varphi _2,u,i)\\ \mid \varphi _1 \vee \varphi _2&{} \implies {{\mathrm{rp}}}(\varphi _1,u,i) \vee {{\mathrm{rp}}}(\varphi _2,u,i)\\ \mid \lnot \varphi ' &{} \implies \lnot {{\mathrm{rp}}}(\varphi ',u,i)\\ \mid \_ &{} \implies \varphi \end{array} \end{array} \end{aligned}$$
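A possible operational reading of the two functions above, over the formula type of the earlier sketch. The array representation of traces (so that u(|u|-d) can be indexed directly) and the names rp and rp_local are assumptions of this sketch; obligations are assumed to reach back no further than the available trace.

```ocaml
(* rp(phi, u): evaluate every past obligation Xbar^d p against the trace u. *)
let rec rp (phi : ltl) (u : event array) : ltl =
  match phi with
  | PrevObl (d, p) ->             (* assumes 1 <= d <= Array.length u *)
      if Props.mem p u.(Array.length u - d) then True else False
  | And (f, g) -> And (rp f u, rp g u)
  | Or (f, g) -> Or (rp f u, rp g u)
  | Not f -> Not (rp f u)
  | _ -> phi                      (* all other formulae are left unchanged *)

(* rp(phi, u, i): evaluate past obligations using only what monitor M_i can
   resolve; obligations on propositions outside AP_i are kept as they are. *)
let rec rp_local (phi : ltl) (u : event array) (ap_i : Props.t) : ltl =
  match phi with
  | PrevObl (d, p) ->
      if Props.mem p u.(Array.length u - d) then True
      else if Props.mem p ap_i then False
      else PrevObl (d, p)
  | And (f, g) -> And (rp_local f u ap_i, rp_local g u ap_i)
  | Or (f, g) -> Or (rp_local f u ap_i, rp_local g u ap_i)
  | Not f -> Not (rp_local f u ap_i)
  | _ -> phi
```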

The following lemma exhibits some straightforward properties of function \({{\mathrm{rp}}}\).

Lemma 7

Let \(\varphi \) be an \(\mathrm {LTL}\)  formula, \(u\in {\varSigma }^+\) be a trace of length \(t+1\), \(i\in [1,|\mathcal{M}|]\) the index of one of the monitors, and \(u_i(t)\in {\varSigma }_i\) the last event of u on component i. We have:

  1. \({{\mathrm{rp}}}\big (P(\varphi ,\sigma _i,{{{AP}_{i}}}), u\big ) = {{\mathrm{rp}}}\big (P({{\mathrm{rp}}}(\varphi ,u(0)\cdots u(t-1)),\sigma _i,{{{AP}_{i}}}),u\big )\);

  2. \({{\mathrm{rp}}}\big (P(\varphi ,\sigma _i,{{{AP}_{i}}}),u\big ) = P(\varphi ,u(t),{{{AP}_{}}})\);

  3. \(P(\varphi ,u_i(t),{{{AP}_{i}}}) = P\big ({{\mathrm{rp}}}(\varphi ,u(0)\cdots u(t-1),i), u_i(t),{{{AP}_{i}}}\big )\);

  4. \(\bigcup _{\varphi '\in {{\mathrm{sus}}}(\varphi )} {{\mathrm{Prop}}}(\varphi ') \subseteq {{{AP}_{i}}} \implies {{\mathrm{rp}}}(\varphi ,u,i) = {{\mathrm{rp}}}(\varphi ,u)\);

  5. For \(\{i_1,\ldots ,i_n\}= [1,|\mathcal{M}|]\): \({{\mathrm{rp}}}({{\mathrm{rp}}}(\ldots {{\mathrm{rp}}}(\varphi ,u,i_1),\ldots ),u,i_n) = {{\mathrm{rp}}}(\varphi ,u)\).

Proof

The proofs of these properties can be done by induction on \(\varphi \in \mathrm {LTL}\,\) and follow directly from the definitions of \({{\mathrm{rp}}}\) and the progression function. \(\square \)

Lemma 8

Given a trace and a local obligation on a monitor obtained by running the decentralised algorithm from an initial obligation on the trace, the local obligation where past sub-formulae have been evaluated using the trace is equal to the initial obligation progressed with this same trace. Formally:

$$\begin{aligned} \begin{array}{l} \forall u\in {\varSigma }^+. \forall i\in [1,|\mathcal{M}|].\forall t\in \mathbb {N}^*.\\ \qquad \qquad \qquad |u|=t+1\wedge {{\mathrm{lo}}}(i,t,\varphi )\ne \# \implies {{\mathrm{rp}}}({{\mathrm{lo}}}(i,t,\varphi ),u) = P(\varphi ,u). \end{array} \end{aligned}$$

Proof

We shall prove this lemma by induction on \(t\in \mathbb {N}^*\). Let us consider some component \(M_i\) where \(i\in [1,|\mathcal{M}|]\).

  • For \(t=0\). In this case, \(|u|=1\) and we have \({{\mathrm{rp}}}({{\mathrm{lo}}}(i,0,\varphi ),u) = {{\mathrm{rp}}}\big (P(\varphi ,\sigma _i,{{{AP}_{i}}}),u\big )\) where \(\sigma _i = {\varPi }_{i}(u(0))\). We can obtain the expected result by doing an induction on \(\varphi \in \mathrm {LTL}\,\) where the only interesting case is \(\varphi =p\in {{{AP}_{}}}\). According to the definition of the progression function, we have:

    $$\begin{aligned} \begin{array}{lcl} P(p, \sigma _i,{{{{AP}_{i}}}}) &{} = &{} \left\{ \begin{array}{ll} \top &{} \quad \text{ if } \quad p \in \sigma _i, \\ \bot &{} \quad \text{ if } \quad p \notin \sigma _i \wedge p \in {{{AP}_{i}}}, \\ \overline{\mathbf {X}}p &{} \quad \text{ otherwise }, \end{array} \right. \end{array} \end{aligned}$$

    Moreover, \(p\in \sigma _i\) implies \(p\in u(0)\) and \(p\notin \sigma _i\) with \(p\in {{{AP}_{i}}}\) implies \(\forall j\in [1,|\mathcal{M}|].\ p\notin {\varPi }_j(u(0))\), i.e., \(p\notin u(0)\).

    On the one hand, according to the definition of \({{\mathrm{rp}}}\), we have:

    $$\begin{aligned} \begin{array}{lcl} {{\mathrm{rp}}}(\overline{\mathbf {X}}p, u(0)) &{} = &{} \left\{ \begin{array}{ll} \top &{} \quad \text{ if } \; p \in u(0), \\ \bot &{} \quad \text{ if } \; p \notin u(0). \\ \end{array} \right. \end{array} \end{aligned}$$

    Thus, we have:

    $$\begin{aligned} \begin{array}{lcl} {{\mathrm{rp}}}\big (P(p, \sigma _i,{{{{AP}_{i}}}}), u(0)\big ) &{} = &{} \left\{ \begin{array}{ll} \top &{} \quad \text{ if } \; p \in u(0), \\ \bot &{} \quad \text{ if } \; p \notin u(0). \\ \end{array} \right. \end{array} \end{aligned}$$

    On the other hand, according to the definition of the progression function, we have:

    $$\begin{aligned} \begin{array}{lcl} P(\varphi , u(0)) &{} = &{} \left\{ \begin{array}{ll} \top &{} \quad \text{ if } \; p \in u(0), \\ \bot &{} \quad \text{ if } \; p \notin u(0). \\ \end{array} \right. \end{array} \end{aligned}$$
  • Let us consider some \(t\in \mathbb {N}^*\) and suppose that the property holds. We have:

    $$\begin{aligned} {{\mathrm{lo}}}(i,t+1,\varphi )= P\big ({{\mathrm{kept}}}(i,t)\wedge {{\mathrm{received}}}(i,t),u_i(t+1),{{{AP}_{i}}}\big ). \end{aligned}$$

    Similarly to the proof of Lemma 10, let us distinguish four cases according to the communication that occurred at the end of time t.

    • If \({{\mathrm{send}}}(i,t)=\mathrm {false}\) and \(\forall j\in [1,|\mathcal{M}|]{\setminus }\{i\}.\ {{\mathrm{send}}}(j,t,i)=\mathrm {false}\). Then, we have:

      $$\begin{aligned} {{\mathrm{lo}}}(i,t+1,\varphi ) = P\left( {{\mathrm{lo}}}\left( i,t,\varphi \right) ,u_i\left( t+1\right) ,{{{AP}_{i}}}\right) \end{aligned}$$

      Let us now compute \({{\mathrm{rp}}}({{\mathrm{lo}}}(i,t+1,\varphi ),u(0)\cdots u(t+1))\):

      $$\begin{aligned} \begin{array}{l} {{\mathrm{rp}}}({{\mathrm{lo}}}(i,t+1,\varphi ),u(0)\cdots u(t+1)) \\ \quad \quad = {{\mathrm{rp}}}(P({{\mathrm{lo}}}(i,t,\varphi ), u_i(t+1),{{{AP}_{i}}}),u(0)\cdots u(t+1))\\ \quad \text {(Lemma 7, item 1)}\\ \quad \quad = {{\mathrm{rp}}}(P({{\mathrm{rp}}}({{\mathrm{lo}}}(i,t,\varphi ),u(0)\cdots u(t)), u_i(t+1),{{{AP}_{i}}}),u(0)\cdots u(t+1)) \\ \quad \text {(induction hypothesis)}\\ \quad \quad = {{\mathrm{rp}}}(P(P(\varphi ,u(0)\cdots u(t)),u_i(t+1),{{{AP}_{i}}}),u(0)\cdots u(t+1))\\ \quad \text {(Lemma 7, item 2)}\\ \quad \quad = P(P(\varphi ,u(0)\cdots u(t)),u(t+1),{{{AP}_{}}})\\ \quad \text {(}P(\varphi ,u(0)\cdots u(t))\text { is a future formula)}\\ \quad \quad = P(\varphi ,u(0)\cdots u(t+1)) \end{array} \end{aligned}$$
    • If \({{\mathrm{send}}}(i,t)=\mathrm {true}\) and \(\exists j\in [1,|\mathcal{M}|]{\setminus }\{i\}.\ {{\mathrm{send}}}(j,t,i)=\mathrm {true}\). Then, we have:

      $$\begin{aligned} {{\mathrm{lo}}}(i,t+1,\varphi ) = P\left( \bigwedge _{j\in J}{{\mathrm{lo}}}\left( j,t,\varphi \right) ,u_i\left( t+1\right) ,{{{AP}_{i}}}\right) , \end{aligned}$$

      s.t. \(\forall j\in J.\ {{\mathrm{send}}}(j,t,i)=\mathrm {true}\). Then:

      $$\begin{aligned} \begin{array}{l} {{\mathrm{rp}}}({{\mathrm{lo}}}(i,t+1,\varphi ),u(0)\cdots u(t+1)) \\ \quad \quad = {{\mathrm{rp}}}(P({\bigwedge }_{j\in J}{{\mathrm{lo}}}(j,t,\varphi ),u_i(t+1),{{{AP}_{i}}}),u(0)\cdots u(t+1)) \\ \quad \text {(definition of the progression function)} \\ \quad \quad = {{\mathrm{rp}}}({\bigwedge }_{j\in J} P({{\mathrm{lo}}}(j,t,\varphi ),u_i(t+1),{{{AP}_{i}}}),u(0)\cdots u(t+1)) \\ \quad \text {(definition of } {{\mathrm{rp}}}\text {)}\\ \quad \quad = {\bigwedge }_{j\in J} {{\mathrm{rp}}}( P({{\mathrm{lo}}}(j,t,\varphi ),u_i(t+1),{{{AP}_{i}}}),u(0)\cdots u(t+1))\\ \quad \text {(Lemma 7, item 1)} \\ \quad \quad = {\bigwedge }_{j\in J} {{\mathrm{rp}}}( P({{\mathrm{rp}}}({{\mathrm{lo}}}(j,t,\varphi ),u(0)\cdots u(t)),u_i(t+1),{{{AP}_{i}}}),u(0)\cdots u(t+1)) \\ \quad \text {(induction hypothesis)}\\ \quad \quad = {\bigwedge }_{j\in J} {{\mathrm{rp}}}( P(P(\varphi ,u(0)\cdots u(t)),u_i(t+1),{{{AP}_{i}}}),u(0)\cdots u(t+1)) \\ \quad \text {(Lemma 7, item 2)}\\ \quad \quad = {\bigwedge }_{j\in J} P(P(\varphi ,u(0)\cdots u(t)),u(t+1),{{{AP}_{}}}) \\ \quad \text {(}P(\varphi ,u(0)\cdots u(t)) \text { is a future formula)} \\ \quad \quad = {\bigwedge }_{j\in J} P(\varphi ,u(0)\cdots u(t+1)) \\ \quad \quad = P(\varphi ,u(0)\cdots u(t+1)) \end{array} \end{aligned}$$
    • If \({{\mathrm{send}}}(i,t)=\mathrm {false}\) and \(\exists j\in [1,|\mathcal{M}|]{\setminus }\{i\}.\ {{\mathrm{send}}}(j,t,i)=\mathrm {true}\). Then, we have:

      $$\begin{aligned} \begin{array}{l} {{\mathrm{lo}}}(i,t+1,\varphi ) = P\big ({{\mathrm{lo}}}(i,t,\varphi )\wedge {\bigwedge }_{j\in J} {{\mathrm{lo}}}(j,t,\varphi ), u_i(t+1),{{{AP}_{i}}}\big )\\ \quad = P\big ({{\mathrm{lo}}}(i,t,\varphi ), u_i(t+1),{{{AP}_{i}}}\big ) \wedge P\big ({\bigwedge }_{j\in J} {{\mathrm{lo}}}(j,t,\varphi ), u_i(t+1),{{{AP}_{i}}}\big ) \end{array} \end{aligned}$$

      where \(\forall j\in J.\ {{\mathrm{send}}}(j,t,i)=\mathrm {true}\). The proof of this case is just a combination of the proofs of the two previous cases.

    • If \({{\mathrm{send}}}(i,t)=\mathrm {true}\) and \(\forall j\in [1,|\mathcal{M}|]{\setminus }\{i\}.\ {{\mathrm{send}}}(j,t,i)=\mathrm {false}\). Then, we have: \({{\mathrm{lo}}}(i,t+1,\varphi ) =\#\). The result holds vacuously.

\(\square \)

1.2.4 Proof of Theorem 3 (p. 15)

The soundness of Algorithm L wrt. centralised progression is now a straightforward consequence of Lemma 8. Indeed, let us consider \(u\in {\varSigma }^*\) s.t. \(u\models _D\varphi =\top \). It implies that \(\exists i\in [1,n].\ {{\mathrm{lo}}}(i,t,\varphi )=\top = {{\mathrm{rp}}}({{\mathrm{lo}}}(i,t,\varphi ),u)\), with \(t=|u|-1\). Applying Lemma 8, we have \(P(\varphi , u) = \top \), i.e., \(u\models _C\varphi =\top \).

The proof for \(u\models _D\varphi =\bot \implies u\models _C\varphi =\bot \) is similar.

Alternative proof of the correctness of decentralised monitoring wrt. \(\mathrm {LTL}\,_{3}\)

Soundness of the decentralised monitoring algorithm wrt. \(\mathrm {LTL}\,_3\) semantics is a consequence of the soundness of the monitoring algorithm wrt. centralised progression (Theorem 3), as stated by Corollary 1, i.e., whenever the decentralised monitoring algorithm yields a verdict for a given trace, then the evaluation of this trace wrt. \(\mathrm {LTL}\,_3\) semantics is the same. For the sake of completeness, we provide an alternative and direct proof.

Some intermediate lemmas We first introduce the intermediate lemmas used in this alternative proof.

The following lemma extends Lemma 1 to the decentralised case, i.e., it states that the progression function mimics \(\mathrm {LTL}\) semantics in the decentralised case.

Lemma 9

Let \(\varphi \) be an \(\mathrm {LTL}\)  formula, \(\sigma \in {\varSigma }\) an event, \(\sigma _i\) a local event observed by monitor \(M_i\), and w an infinite trace. We have \(\sigma \cdot w\models \varphi \iff (\sigma \cdot w)^1\models P(\varphi ,\sigma _i,{\varSigma }_i)\).

Proof

We shall prove that:

$$\begin{aligned} \begin{array}{l} \forall i\in [1,n].\ \forall \varphi \in \mathrm {LTL}\,.\forall \sigma \in {\varSigma }.\forall \sigma _i\in {\varSigma }_i.\forall w\in {\varSigma }^\omega .\\ \qquad \qquad \qquad \qquad \qquad \qquad \sigma \cdot w\models \varphi \iff (\sigma \cdot w)^1\models P(\varphi ,\sigma _i,{{{AP}_{i}}}). \end{array} \end{aligned}$$

The proof is done by induction on the formula \(\varphi \in \mathrm {LTL}\,\). Notice that when \(\varphi \) is not an atomic proposition, the lemma reduces to Lemma 1. Thus, we just need to treat the case \(\varphi =p\in {{{AP}_{}}}\).

If \(\varphi =p\in {{{AP}_{}}}\). We have \(\sigma \cdot w\models p\iff p\in \sigma \). Let us consider \(i\in [1,n]\); according to the definition of the progression function (1):

$$\begin{aligned} \begin{array}{lcl} P(p, \sigma _i,{{{AP}_{i}}}) &{} = &{} \left\{ \begin{array}{ll} \top &{} \quad \text{ if } \quad p \in \sigma _i, \\ \bot &{} \quad \text{ if } \quad p \notin \sigma _i \wedge p \in {{{AP}_{i}}}, \\ \overline{\mathbf {X}}p &{} \quad \text{ otherwise }. \end{array} \right. \end{array} \end{aligned}$$

Let us distinguish three cases.

  • Suppose \(p\in \sigma _i\). On the one hand, we have \(p\in \sigma \) and then \(\sigma \cdot w\models p\). On the other hand, we have \(P(p, \sigma _i,{{{AP}_{i}}})=\top \) and thus \(w\models P(p, \sigma _i,{{{AP}_{i}}})\).

  • Suppose \(p\notin \sigma _i\) and \(p\in {{{AP}_{i}}}\). On the one hand, since \(p\in {{{AP}_{i}}}\) and \(p\notin \sigma _i\), we have \(p\notin \sigma \), and thus \(\sigma \cdot w\not \models p\). On the other hand, we have \(P(p, \sigma _i,{{{AP}_{i}}})=\bot \), and thus \(w\not \models P(p, \sigma _i,{{{AP}_{i}}})\).

  • Suppose \(p\notin \sigma _i\) and \(p\notin {{{AP}_{i}}}\). In this case, \(P(p, \sigma _i,{{{AP}_{i}}})=\overline{\mathbf {X}}p\), and we have \((\sigma \cdot w)^1\models \overline{\mathbf {X}}p\iff p\in (\sigma \cdot w)(0)=\sigma \iff \sigma \cdot w\models p\).

\(\square \)
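The case analysis above can be condensed into the following sketch of the decentralised progression rule for the two cases that differ from the centralised function sketched earlier, namely atomic propositions and past obligations \(\overline{\mathbf {X}}^d p\). The helper holds_ago, assumed to look up in the monitor's stored history whether p held d steps in the past, is an assumption of this sketch (the case \(\overline{\mathbf {X}}^{d'}p'\) in the proof of Lemma 4 only states that the result is \(\top \) or \(\bot \) when \(p'\) is local).

```ocaml
(* Decentralised progression, atomic cases only: sigma_i is the local event of
   monitor M_i and ap_i its local alphabet of atomic propositions. *)
let progress_local_atom
    (phi : ltl) (sigma_i : event) (ap_i : Props.t)
    (holds_ago : int -> string -> bool) : ltl =
  match phi with
  | Prop p ->
      if Props.mem p sigma_i then True
      else if Props.mem p ap_i then False
      else PrevObl (1, p)          (* Xbar p: p must be resolved by another monitor *)
  | PrevObl (d, p) ->
      if Props.mem p ap_i then (if holds_ago d p then True else False)
      else PrevObl (d + 1, p)      (* the obligation gets one step older *)
  | _ -> phi                       (* other connectives are left untouched in this sketch *)
```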

The following lemma states that “the satisfaction of an LTL formula” is propagated by the decentralised monitoring algorithm.

Lemma 10

$$\begin{aligned}&\forall t\in \mathbb {N}^{\ge 0}.\forall i\in [1,n].\forall \varphi \in \mathrm {LTL}\,.\forall w\in {\varSigma }^\omega .\ {{\mathrm{inlo}}}(i,t,\varphi )\ne \# \\&\quad \implies \left( w\models \varphi \iff w^t\models {{\mathrm{inlo}}}(i,t,\varphi )\right) . \end{aligned}$$

Proof

The proof is done by induction on \(t\in \mathbb {N}^{\ge 0}\).

  • For \(t=0\), the proof is trivial since \(\forall i\in [1,n].\forall \varphi \in \mathrm {LTL}\,.\ {{\mathrm{inlo}}}(i,0,\varphi )=\varphi \) and \(w^0=w\).

  • Let us consider some \(t\in \mathbb {N}^{\ge 0}\) and suppose that the lemma holds. Let us consider \(i\in [1,n]\), we have:

    $$\begin{aligned} {{\mathrm{inlo}}}(i,t+1,\varphi ) = {{\mathrm{kept}}}(i,t)\wedge {{\mathrm{received}}}(i,t+1). \end{aligned}$$

    Let us now distinguish four cases according to the communication performed by local monitors at the end of time t, i.e., according to \({{\mathrm{send}}}(i,t)\) and \({{\mathrm{send}}}(j,t,i)\), for \(j\in [1,n]{\setminus }\{i\}\).

    • If \({{\mathrm{send}}}(i,t)=\mathrm {false}\) and \(\exists j\in [1,n]{\setminus }\{i\}.\ {{\mathrm{send}}}(j,t,i)=\mathrm {true}\). Then, we have:

      $$\begin{aligned} {{\mathrm{inlo}}}(i,t+1,\varphi ) = P\big ( {{\mathrm{inlo}}}(i,t,\varphi ) \wedge \bigwedge _{j\in J} {{\mathrm{inlo}}}(j,t,\varphi ),u_i(t+1),{\varSigma }_i\big ). \end{aligned}$$

      where \(\forall j\in J.\ {{\mathrm{send}}}(j,t,i)=\mathrm {true}\). Applying the definition of the progression function, we have:

      $$\begin{aligned} \begin{array}{l} {{\mathrm{inlo}}}(i,t+1,\varphi ) \\ \quad = P\big ( {{\mathrm{inlo}}}(i,t,\varphi ),u_i(t+1),{\varSigma }_i\big ) \wedge {\bigwedge }_{j\in J} P\big ({{\mathrm{inlo}}}(j,t,\varphi ),u_i(t+1),{\varSigma }_i\big ). \end{array} \end{aligned}$$

      Now, we have:

      $$\begin{aligned} \begin{array}{ll} w^{t+1}\models {{\mathrm{inlo}}}(i,t+1,\varphi ) \iff &{} w^{t+1}\models P\big ( {{\mathrm{inlo}}}(i,t,\varphi ),u_i(t+1),{\varSigma }_i\big ) \\ &{} \wedge \ {\bigwedge }_{j\in J} w^{t+1}\models P\big ({{\mathrm{inlo}}}(j,t,\varphi ),u_i(t+1),{\varSigma }_i\big ). \end{array} \end{aligned}$$

      With, according to Lemma 9:

      $$\begin{aligned} w^{t+1}\models P\big ({{\mathrm{inlo}}}(i,t,\varphi ),u_i(t+1),{\varSigma }_i\big ) \iff w^t\models {{\mathrm{inlo}}}(i,t,\varphi ) \\ \end{aligned}$$

      And similarly:

      $$\begin{aligned} {\forall j\in J}.\ w^{t+1}\models P\big ({{\mathrm{inlo}}}(j,t,\varphi ),u_i(t+1),{\varSigma }_i\big ) \iff w^t\models {{\mathrm{inlo}}}(j,t,\varphi ) \\ \end{aligned}$$

      It follows that:

      $$\begin{aligned} w^{t+1}\models {{\mathrm{inlo}}}(i,t+1,\varphi ) \iff w^t\models {{\mathrm{inlo}}}(i,t,\varphi ) \wedge \bigwedge _{j\in J}w^t\models {{\mathrm{inlo}}}(j,t,\varphi ). \end{aligned}$$

      And finally:

      $$\begin{aligned} w^{t+1}\models {{\mathrm{inlo}}}(i,t+1,\varphi ) \iff w^t\models {{\mathrm{inlo}}}(i,t,\varphi ). \end{aligned}$$
    • If \({{\mathrm{send}}}(i,t)=\mathrm {true}\) and \(\exists j\in [1,n]{\setminus }\{i\}.\ {{\mathrm{send}}}(j,t,i)=\mathrm {true}\). Then, we have:

      $$\begin{aligned} \begin{array}{rcl} {{\mathrm{inlo}}}(i,t+1,\varphi ) &{}=&{} P\big (\#\wedge {\bigwedge }_{j\in J} {{\mathrm{inlo}}}(j,t,\varphi ),u_i(t+1),{\varSigma }_i)\\ &{}=&{} P\big ({\bigwedge }_{j\in J} {{\mathrm{inlo}}}(j,t,\varphi ),u_i(t+1),{\varSigma }_i)\\ \end{array} \end{aligned}$$

      where \(\forall j\in J.\ {{\mathrm{send}}}(j,t,i)=\mathrm {true}\). The previous reasoning can be followed in the same manner to obtain the expected result.

    • If \({{\mathrm{send}}}(i,t)=\mathrm {false}\) and \(\forall j\in [1,n]{\setminus }\{i\}.\ {{\mathrm{send}}}(j,t,i)=\mathrm {false}\). Then, we have:

      $$\begin{aligned} {{\mathrm{inlo}}}(i,t+1,\varphi ) = P\big ({{\mathrm{inlo}}}(i,t,\varphi ) ,u_i(t+1),{\varSigma }_i). \end{aligned}$$

      The previous reasoning can be followed in the exact same manner to obtain the expected result.

    • If \({{\mathrm{send}}}(i,t)=\mathrm {true}\) and \(\forall j\in [1,n]{\setminus }\{i\}.\ {{\mathrm{send}}}(j,t,i)=\mathrm {false}\). Then, we have:

      $$\begin{aligned} {{\mathrm{inlo}}}(i,t+1,\varphi ) = P(\#,u_i(t+1),{\varSigma }_i) = \# \end{aligned}$$

      In this case, the result holds vacuously.

\(\square \)
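The four communication cases distinguished in this proof can be summed up by the following sketch of how a monitor updates its local obligation from t to t+1: whatever was kept (nothing, if the obligation was sent away) is conjoined with the received obligations and progressed with the next local event; when nothing is kept and nothing is received, the marker # (here None) results. The function and parameter names are assumptions of this sketch.

```ocaml
(* One update step of a local obligation, following the four send/receive cases. *)
let update_local_obligation
    (sent_away : bool)             (* did M_i forward its obligation at the end of t? *)
    (received : ltl list)          (* obligations received from other monitors       *)
    (current : ltl)                (* the local obligation at time t                  *)
    (progress_i : ltl -> ltl)      (* local progression with the event at t+1         *)
    : ltl option =
  let kept = if sent_away then [] else [ current ] in
  match kept @ received with
  | [] -> None                     (* nothing left to monitor locally: marker #       *)
  | f :: fs ->
      let conj = List.fold_left (fun acc g -> And (acc, g)) f fs in
      Some (progress_i conj)       (* P(kept /\ received, u_i(t+1), ...)              *)
```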

Back to the proof of soundness The soundness of Algorithm L is now a straightforward consequence of Lemmas 9 and 10. Indeed, let us consider \(u\in {\varSigma }^*\) s.t. \(|u|=t\). We have \(u\models _D\varphi =\top \) implies that \(\exists i\in [1,n].\ {{\mathrm{lo}}}(i,t,\varphi )=\top \) and then \({{\mathrm{inlo}}}(i,t+1,\varphi ) = \top \). It implies that \(\forall w\in {\varSigma }^\omega .\ w\models {{\mathrm{inlo}}}(i,t+1,\varphi )\). Since \(|u|=t\), it follows that \(\forall w\in {\varSigma }^\omega .\ (u\cdot w)^t\models {{\mathrm{inlo}}}(i,t+1,\varphi )\). Applying Lemma 10, we have \(\forall w\in {\varSigma }^\omega .\ u\cdot w \models \varphi \), i.e., \(u\models _3\varphi =\top \).

The proof for \(u\models _D\varphi =\bot \implies u\models _3\varphi =\bot \) is similar.\(\square \)

1.2.5 Proof of Theorem 4 (p. 15)

The proof of Theorem 4 intuitively consists in showing that, in a given architecture, we can successively take two components and merge them to obtain an equivalent architecture, in the sense that both produce the same verdicts. The difference is that if a verdict is emitted in the merged architecture, then the same verdict will be produced in the non-merged architecture with an additional delay.

Lemma 11

In a two-component architecture, if in the centralised case a verdict is produced for some trace u, then, in the decentralised case, one of the monitors will produce the same verdict after reading one additional event. Formally:

$$\begin{aligned} \forall \varphi \in \mathrm {LTL}\,.\forall u\in {\varSigma }^+.\ P(\varphi ,u) = \top /\bot \implies \forall \sigma \in {\varSigma }.\exists i\in [1,2].\ {{\mathrm{lo}}}(i,|u\cdot \sigma |,\varphi ) = \top /\bot . \end{aligned}$$

Proof

Let us consider a formula \(\varphi \in \mathrm {LTL}\,\) and a trace \(u\in {\varSigma }^+\) s.t. \(|u|=t+1\). We shall only consider the case where \(P(\varphi ,u)=\top \). The other case is symmetrical. Let us suppose that \({{\mathrm{lo}}}(1,t,\varphi )\ne \top \) and \({{\mathrm{lo}}}(2,t,\varphi )\ne \top \) (otherwise the result holds immediately). Because of the correctness of the algorithm (Theorem 3), we know that \({{\mathrm{lo}}}(1,t,\varphi )\ne \bot \) and \({{\mathrm{lo}}}(2,t,\varphi )\ne \bot \). Moreover, according to Lemma 8, necessarily \({{\mathrm{lo}}}(1,t,\varphi )\) and \({{\mathrm{lo}}}(2,t,\varphi )\) are urgent formulae: \({\varUpsilon }({{\mathrm{lo}}}(1,t,\varphi ))>0\) and \({\varUpsilon }({{\mathrm{lo}}}(2,t,\varphi ))>0\). Since there are only two components in the considered architecture, we have \(\bigcup _{\varphi '\in {{\mathrm{sus}}}({{\mathrm{lo}}}(1,t,\varphi ))} {{\mathrm{Prop}}}(\varphi ')\subseteq {{{AP}_{2}}}\) and \(\bigcup _{\varphi '\in {{\mathrm{sus}}}({{\mathrm{lo}}}(2,t,\varphi ))}{{\mathrm{Prop}}}(\varphi ')\subseteq {{{AP}_{1}}}\). According to Algorithm L, we then have \({{\mathrm{send}}}(1,t,2) = \mathrm {true}\) and \({{\mathrm{send}}}(2,t,1)=\mathrm {true}\). Then \({{\mathrm{inlo}}}(1,t+1,\varphi ) = \#\wedge {{\mathrm{lo}}}(2,t,\varphi ) = {{\mathrm{lo}}}(2,t,\varphi )\). Hence: \({{\mathrm{lo}}}(1,t+1,\varphi )= P({{\mathrm{lo}}}(2,t,\varphi ), u_1(t+1),{{{AP}_{1}}})\). According to Lemma 7, item 3, we have \({{\mathrm{lo}}}(1,t+1,\varphi )= P({{\mathrm{rp}}}({{\mathrm{lo}}}(2,t,\varphi ),u(0)\cdots u(t),1),u_1(t+1),{{{AP}_{1}}})\). Since

$$\begin{aligned} \bigcup _{\varphi '\in {{\mathrm{sus}}}({{\mathrm{lo}}}(2,t,\varphi ))}{{\mathrm{Prop}}}(\varphi ') \subseteq {{{AP}_{1}}}, \end{aligned}$$

we have \({{\mathrm{rp}}}\big ({{\mathrm{lo}}}(2,t,\varphi ),u(0)\cdots u(t),1\big ) = {{\mathrm{rp}}}\big ({{\mathrm{lo}}}(2,t,\varphi ),u(0)\cdots u(t)\big )\). It follows that:

$$\begin{aligned} \begin{array}{lll} {{\mathrm{lo}}}(1,t+1,\varphi )&{} = P({{\mathrm{rp}}}({{\mathrm{lo}}}(2,t,\varphi ),u(0)\cdots u(t)), u_1(t+1),{{{AP}_{1}}}) &{} \\ &{}= P( P(\varphi ,u(0)\cdots u(t)), u_1(t+1), {{{AP}_{1}}}) &{} \text {(Lemma 8)}\\ &{}= P(\top ,u_1(t+1),{{{AP}_{1}}}) = \top . \end{array} \end{aligned}$$

Symmetrically, we can find that \({{\mathrm{lo}}}(2,t+1,\varphi ) = \top \). \(\square \)

Consider two components \(C_1\) and \(C_2\) with two attached monitors \(M_1\) and \(M_2\) observing respectively two partial traces \(u_1\) and \(u_2\) of some global trace u. The alphabets of \(C_1\) and \(C_2\) are \({\varSigma }_1\) and \({\varSigma }_2\), built over \({{{AP}_{1}}}\) and \({{{AP}_{2}}}\), respectively. Consider the architecture \(\mathcal{C}= \{C_1,C_2\}\) with the set of monitors \(\mathcal{M} = \{M_1,M_2\}\). Let us define the new component \({{\mathrm{merge}}}(C_1,C_2)\) that produces events in \(2^{{{{AP}_{1}}} \cup {{{AP}_{2}}}}\). To component \({{\mathrm{merge}}}(C_1,C_2)\) is attached a monitor M observing events in the same alphabet. Then, let us consider the architecture \(\mathcal{C}'= \{{{\mathrm{merge}}}(C_1,C_2)\}\), which is a one-component architecture with the set of monitors \(\mathcal{M}' =\{{{\mathrm{merge}}}(M_1,M_2)\}\).

We can parameterise the satisfaction relation of \(\mathrm {LTL}\,\) formulae according to the considered architecture. The relation \(\models _D\) becomes \(\models _D^\mathcal{M}\) where \(\mathcal{M}\) is the considered architecture. The definition of \(\models _D^\mathcal{M}\) is the same as the definition of \(\models _D\) (Definition 7).

Lemma 12

Considering the monitoring architectures \(\mathcal{M} = \{M_1,M_2\}\) and \(\mathcal{M}' =\{{{\mathrm{merge}}}(M_1, M_2)\}\) (where monitors of \(\mathcal{M}\) are merged), we have:

$$\begin{aligned} \forall u\in {\varSigma }^+.\forall \varphi \in \mathrm {LTL}\,.\ u\models _D^{\mathcal{M}'}\varphi = \top /\bot \implies \forall \sigma \in {\varSigma }.\ u\cdot \sigma \models _D^{\mathcal{M}} \varphi = \top /\bot . \end{aligned}$$

Proof

This is a direct consequence of Lemma 11. Indeed, \(\mathcal{M}'\) is a one-component architecture, thus \(u\models _D^{\mathcal{M}'}\varphi = \top /\bot \) implies \(P(\varphi ,u) = \top /\bot \). Now, since \(\mathcal{M}\) is a two-component architecture, using Lemma 11, for all \(\sigma \in {\varSigma }\), there exists \(i\in [1,|\mathcal{M}|]\) s.t. \({{\mathrm{lo}}}(i,|u\cdot \sigma |, \varphi ) = \top /\bot \). That is, \(u\cdot \sigma \models _D^\mathcal{M} \varphi = \top /\bot \). \(\square \)

In the remainder, we consider an n-component architecture \(\mathcal{M}\), with \(n\ge 2\) such that the priority between components is \(M_1 < M_2<\cdots <M_n\).Footnote 6

The following lemma relates verdict production in an n-component architecture and verdict production in the same architecture where the two components with the lowest priority have been merged.

Lemma 13

Let us consider the architecture \(\mathcal{M}'=\{{{\mathrm{merge}}}(M_1,M_2),M_3,\ldots ,M_n\}\) where the two components with the lowest priority \(M_1\) and \(M_2\) have been merged. We have:

$$\begin{aligned} \forall u\in {\varSigma }^+.\forall \varphi \in \mathrm {LTL}\,.\ u\models _D^{\mathcal{M}'}\varphi = \top /\bot \implies \forall \sigma \in {\varSigma }.\ u\cdot \sigma \models _D^\mathcal{M} \varphi = \top /\bot . \end{aligned}$$

Proof

We give a proof for the case where the verdict is \(\top \) (the other case is symmetrical). Let us consider \(u\in {\varSigma }^+, \varphi \in \mathrm {LTL}\,\) s.t. \(u\models _D^{\mathcal{M}'}\varphi = \top \). Let \(u'\) be the smallest prefix of u s.t. \(P(\varphi ,u' )= \top \). From the theorem about the maximal delay (Theorem 2), we have \(|u|-|u'|\le (n-1)\). Thus, each of the local obligations in architecture \(\mathcal{M}'\) will transit through at most n monitors following the ordering between components. That is, in the worst case (i.e., if a verdict is not produced before time |u|), any obligation will be progressed according to all components. More precisely, each time a local obligation is progressed on some component \(C_i\), past obligations w.r.t. component \(C_i\) are removed (Lemma 7—item 3). Using the compositionality of \({{\mathrm{rp}}}\) and the progression function on conjunction, in the worst case the local obligation at time \(|u'| +n\) will be a conjunction of formulae of the form

$$\begin{aligned} \begin{array}{l} P(\\ \quad \cdots \\ \quad \cdots P( \\ \quad \quad \quad \quad P( {{\mathrm{rp}}}(\cdots {{\mathrm{rp}}}({{\mathrm{rp}}}(\varphi ,u',i),u',i_1)\cdots ,u',i_n), u_i(|u'|), {{{AP}_{i}}}),\\ \quad \quad u_{i_1}(|u'|+1), {{{AP}_{i_1}}}),\\ \quad \cdots ,\\ u_{i_n}(|u'|+n), {{{AP}_{i_n}}})\\ \end{array} \end{aligned}$$

where \(\varphi \) is a local obligation at time \(|u'|\) and \(\{i_1,\ldots ,i_n\} \supseteq [1,|\mathcal{M'}|]\) (because of the ordering between components). According to Lemma 7—item 5, we have:

$$\begin{aligned} {{\mathrm{rp}}}(\cdots {{\mathrm{rp}}}({{\mathrm{rp}}}(\varphi ,u',i),u',i_1)\cdots ,u',i_n) = {{\mathrm{rp}}}(\varphi ,u') = \top . \end{aligned}$$

Following the definition of the progression function for \(\top \), necessarily, the resulting local obligation at time \(|u'|+n\) is \(\top \). \(\square \)

Lemma 14

Let us consider the architecture \(\mathcal{M}'=\{{{\mathrm{merge}}}(M_n, {{\mathrm{merge}}}(\ldots ,{{\mathrm{merge}}}(M_2,M_1)\ldots ))\}\), then we have:

$$\begin{aligned} \forall u\in {\varSigma }^+.\forall \varphi \in \mathrm {LTL}\,.\ u\models _D^{\mathcal{M}'}\varphi = \top /\bot \implies \forall u'\in {\varSigma }^+.\ |u'|\ge n\implies u\cdot u'\models _D^\mathcal{M} \varphi = \top /\bot . \end{aligned}$$

Proof

By an easy induction on the number of components merged using Lemma 13. \(\square \)

Proof of Theorem 4 (p. 15)

Based on Lemma 14, we can now easily show Theorem 4.

Proof

Let us consider an n-component architecture \(\mathcal{M} = \{M_1,\ldots ,M_n\}\), a trace \(u\in {\varSigma }^+\) and a formula \(\varphi \in \mathrm {LTL}\,\). Let us suppose that \(u\models _C \varphi = \top /\bot \). As the alphabets of the monitors are respectively \({\varSigma }_1,\ldots ,{\varSigma }_n\) and each monitor \(M_i\) observes a sub-trace \(u_i\) of u where the hypothesis about alphabet partitioning mentioned in Sect. 2 holds, we can consider the centralised setting equivalent to monitoring with the architecture \(\mathcal{M}'=\{{{\mathrm{merge}}}(M_n, {{\mathrm{merge}}}(\ldots , {{\mathrm{merge}}}(M_2, M_1)\ldots ))\}\), where there is a unique monitor M observing the same trace u. Using Lemma 14, from \(u\models _D^{\mathcal{M}'}\varphi = \top /\bot \), we get \(\forall u'\in {\varSigma }^+.\ |u'|\ge n\implies u\cdot u'\models _D^\mathcal{M} \varphi = \top /\bot \), as required. \(\square \)
