Correction to: Discrete Event Dyn Syst (2022) 32: 65

In (Lefebvre et al. 2022), there was a flaw in the proof of Lemma 2. The statement of Lemma 2 remains correct, but its proof requires a slightly different definition of the e-transition probability matrix than the one given in Definition 5. This note provides the corrections and adjusts Examples 3 and 4 accordingly. The other results, proofs, and examples in (Lefebvre et al. 2022) remain unchanged.

Correction to Definition 5 in (Lefebvre et al. 2022): Given an LCTMM G = (X, E, Λ, π0), for each event \(e \in E\) its e-transition probability matrix \(Q_{e} = (q_{e,i,j}) \in \mathbb{R}_{\geq 0}^{n \times n}\) (where \(q_{e,i,j}\) is the element of matrix \(Q_{e}\) in row i and column j) is defined by \(q_{e,i,j} = \mu(x_{i}, e, x_{j})\), where, for \(x_{i} \in X\), \(e \in E\), and \(x_{j} \in Post(x_{i})\), \(\mu(x_{i}, e, x_{j})\) is the sum of the firing rates of the e-transitions from state \(x_{i}\) to \(x_{j}\) (with \(\mu(x_{i}, e, x_{j}) = 0\) if no e-transition exists from state \(x_{i}\) to \(x_{j}\)).

Correction to Example 3 in (Lefebvre et al. 2022): The a-transition and b-transition probability matrices of the LCTMM in Fig. 1 with alphabet E = {a, b} are the matrices \(Q_{a}\) and \(Q_{b}\) detailed below:

$$ Q_{a} = \left[ \begin{array}{ccc} 0 & \mu_{1,1} & 0 \\ \mu_{2,1} & 0 & 0 \\ \mu_{3,3} & 0 & \mu_{3,1} \end{array}\right], \qquad Q_{b} = \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & \mu_{3,2} & 0 \end{array}\right]. $$
Fig. 1: A labeled continuous-time Markov model
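To make the construction of the revised Definition 5 and the matrices of Example 3 above concrete, the following Python/NumPy sketch assembles \(Q_{a}\) and \(Q_{b}\) for the LCTMM in Fig. 1. The numeric rate values are borrowed from the correction to Example 4 below (\(\mu_{1,1} = 2\), \(\mu_{3,1} = 3\), all other rates equal to 1); the choice of language and variable names is ours, not part of the original example.

```python
import numpy as np

# Firing rates of the LCTMM in Fig. 1, using the values from the correction
# to Example 4 below: mu_{1,1} = 2, mu_{3,1} = 3, all other rates equal to 1.
mu11, mu21, mu31, mu32, mu33 = 2.0, 1.0, 3.0, 1.0, 1.0

# e-transition probability matrices (revised Definition 5): entry (i, j) is
# the sum of the firing rates of the e-transitions from x_i to x_j.
Q_a = np.array([[0.0,  mu11, 0.0],
                [mu21, 0.0,  0.0],
                [mu33, 0.0,  mu31]])

Q_b = np.array([[0.0, 0.0,  0.0],
                [0.0, 0.0,  0.0],
                [0.0, mu32, 0.0]])
```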

Correction to Lemma 2 in (Lefebvre et al. 2022): Consider an LCTMM G = (X, E, Λ, π0) and its e-transition probability matrices as in the revised Definition 5 above. Given an observation σ = (e, t) with \(e \in E\), it holds that:

$$ \boldsymbol{\pi}(t \mid \boldsymbol{\pi}_{0}, \sigma) = \displaystyle \frac{\boldsymbol{\pi}(t^{-} \mid \boldsymbol{\pi}_{0}) \cdot Q_{e}}{\boldsymbol{\pi}(t^{-} \mid \boldsymbol{\pi}_{0}) \cdot Q_{e} \cdot \textbf{1}_{n \times 1}}, $$
(1)

where \(\textbf{1}_{n \times 1}\) is the all-ones column vector of dimension n.
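In practice, the update of Eq. (1) is a single vector-matrix product followed by a normalization. The short sketch below is one possible NumPy implementation (the function name observation_update is ours, not from the original paper).

```python
import numpy as np

def observation_update(pi_minus: np.ndarray, Q_e: np.ndarray) -> np.ndarray:
    """Bayesian update of Eq. (1): compute pi(t | pi0, (e, t)) from
    pi(t^- | pi0) and the e-transition probability matrix Q_e."""
    unnormalized = pi_minus @ Q_e        # pi(t^-) . Q_e
    total = unnormalized.sum()           # pi(t^-) . Q_e . 1_{n x 1}
    if total == 0.0:
        # Cannot happen for a genuine observation of e (see the proof below);
        # guard against inconsistent inputs.
        raise ValueError("event e cannot be observed from the current belief")
    return unnormalized / total
```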

Proof

For each state \(x_{j}\) of the LCTMM, it holds that

$$\begin{array}{lll} &&\pi_{j}(t \mid \boldsymbol{\pi}_{0}, (e,t))\\ & = &\lim \limits_{dt \rightarrow 0} \displaystyle Pr(x(t) = x_{j} \mid (e,(t-dt, t])) \\ & = & \lim \limits_{dt \rightarrow 0} \displaystyle \frac{Pr(x(t) = x_{j} \cap (e,(t-dt, t]))}{Pr((e,(t-dt, t]))} \\ & = & \lim \limits_{dt \rightarrow 0} \displaystyle \sum\limits_{i=1}^{n} \frac{Pr((x(t) = x_{j} \cap (e,(t-dt, t])) \mid x(t-dt)=x_{i}) \cdot Pr(x(t-dt)=x_{i})}{Pr((e,(t-dt, t]))} \end{array}$$

The numerator and the denominator of the previous expression are reformulated as follows.

  • Given an infinitesimal interval dt, the quantity \(q_{e,i,j}\, dt\) represents the probability that a transition to \(x(t) = x_{j}\) occurs when event e is observed in the interval \((t-dt, t]\), given that \(x(t-dt) = x_{i}\). More formally, \(Pr(x(t) = x_{j} \cap (e,(t-dt, t]) \mid x(t-dt) = x_{i}) = q_{e,i,j}\, dt\).

  • On the other hand,

    $$\begin{array}{lll} Pr((e,(t-dt, t]))&=&\displaystyle \sum\limits_{i=1}^{n} Pr((e,(t-dt, t]) \mid x(t-dt)=x_{i}) \cdot Pr(x(t-dt)=x_{i})\\ &=&\displaystyle \sum\limits_{i=1}^{n} \left( \sum\limits_{j=1}^{n}q_{e,i,j}\, dt \right) \cdot Pr(x(t-dt)=x_{i}) \end{array} $$

Considering that \(\lim \limits _{dt \rightarrow 0} Pr(x(t-dt)=x_{i}) = \pi _{i}(t^{-} \mid \boldsymbol {\pi }_{0}), \) we have

$$\begin{array}{lll} \pi_{j}(t \mid \boldsymbol{\pi}_{0}, (e,t)) & =& \displaystyle \frac{{\sum}_{i=1}^{n} q_{e,i,j} \cdot \pi_{i}(t^{-} \mid \boldsymbol{\pi}_{0})} {{\sum}_{j=1}^{n}\left( {\sum}_{i=1}^{n} q_{e,i,j} \cdot \pi_{i}(t^{-} \mid \boldsymbol{\pi}_{0}) \right)} \end{array}$$

which is Eq. (1) in matrix form. Observe that the denominator in Eq. (1) is nonzero because the event e has been observed at time t, i.e., there must exist a state \(x_{i}\) from which a transition labeled e may occur and such that \(\pi_{i}(t^{-} \mid \boldsymbol{\pi}_{0}) > 0\). □
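As a sanity check, the componentwise expression obtained at the end of the proof and the matrix form of Eq. (1) can be compared numerically on arbitrary data; the sketch below uses random nonnegative entries and is only meant to illustrate that the two expressions coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Q_e = rng.random((n, n))                  # arbitrary nonnegative e-transition rates
pi_minus = rng.random(n)
pi_minus /= pi_minus.sum()                # arbitrary belief pi(t^- | pi0)

# componentwise expression from the proof
num = np.array([sum(Q_e[i, j] * pi_minus[i] for i in range(n)) for j in range(n)])
componentwise = num / num.sum()

# matrix form of Eq. (1)
matrix_form = (pi_minus @ Q_e) / (pi_minus @ Q_e @ np.ones(n))

assert np.allclose(componentwise, matrix_form)
```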

Correction to Example 4 in (Lefebvre et al. 2022): Consider the LCTMM in Fig. 1 with firing rates \(\mu_{1,1} = 2\) and \(\mu_{3,1} = 3\), all other rates being equal to 1, and the sequence of observations σ = (a,1)(b,3)(a,4)(a,5) within the time interval [0,7]. The state probabilities are reported in Fig. 2.

Fig. 2: State probabilities with respect to σ = (a,1)(b,3)(a,4)(a,5); x1: top, x2: center, x3: bottom

In order to illustrate how the time stamps of the observations influence the probabilities of the states, consider also the sequence of observations σ = (a, t1) for several values of t1 within the time interval [0,4]. Observe in Fig. 3 that the probability of x3 at time t = 4 changes depending on the value of t1.

Fig. 3: Probability of x3 with respect to σ = (a, t1) with t1 = 3 (top), t1 = 2 (center), and t1 = 1 (bottom)

Comments on the corrections:

Let us consider some basic cases that explain and illustrate Definition 5 and Lemma 2.

Consider the LCTMM in Fig. 4(a) with π0 = [1 0 0], where a and b are two observable labels. As long as no label is observed up to time t, we have \(\boldsymbol{\pi}(t^{-} \mid \boldsymbol{\pi}_{0}) = [1~0~0]\) because there exists no silent evolution from state x1. When label a is observed at t, we obtain \(\boldsymbol{\pi}(t \mid \boldsymbol{\pi}_{0}, \sigma) = [0~1~0]\) with σ = (a, t). According to Eq. (1), this can be written as

$$ \boldsymbol{\pi}(t \mid \boldsymbol{\pi}_{0}, \sigma) = \displaystyle \frac{ \left[ \begin{array}{ccc} 1 &0 &0 \end{array}\right] \cdot \left[ \begin{array}{ccc} 0 & \mu & 0 \\ 0 & 0 &0 \\ 0 &0 &0 \end{array}\right] } { \left[ \begin{array}{ccc} 1 &0 &0 \end{array}\right] \cdot \left[ \begin{array}{ccc} 0 & \mu & 0 \\ 0 & 0 &0 \\ 0 &0 &0 \end{array}\right] \cdot \left[ \begin{array}{c} 1 \\ 1\\ 1 \end{array}\right] } = \left[ \begin{array}{ccc} 0 &1& 0 \end{array}\right]. $$
(2)
Fig. 4: Three simple examples

Note that the probability \(Pr((a,(t-dt, t]))\) of observing a within \((t-dt, t]\), assuming that nothing was observed before time \(t-dt\) (and consequently that the system stays at x1 before \(t-dt\)), is equal to the probability that the delay of a is smaller than dt (which is \(\mu\, dt\)) and that the delay of b is greater than dt (which is \(1 - \mu^{\prime}\, dt\)). Since the delays of a and b are independent and dt is an infinitesimal duration, we have \(Pr(a,dt) = (\mu\, dt) \cdot (1 - \mu^{\prime}\, dt) = \mu\, dt - \mu \mu^{\prime}\, dt^{2} \approx \mu\, dt\).
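Eq. (2) can be reproduced with the update of Eq. (1); the rate value μ = 0.5 used below is arbitrary, since the normalization removes it.

```python
import numpy as np

mu = 0.5                                  # any positive rate gives the same result
pi_minus = np.array([1.0, 0.0, 0.0])      # pi(t^- | pi0) for Fig. 4(a)
Q_a = np.array([[0.0, mu, 0.0],           # the only a-transition goes from x1 to x2
                [0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]])

num = pi_minus @ Q_a
print(num / num.sum())                    # -> [0. 1. 0.], as in Eq. (2)
```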

Consider the LCTMM in Fig. 4(b) with π0 = [1 0 0], where a is the single observable label. As long as no label a is observed, we have \(\boldsymbol{\pi}(t^{-} \mid \boldsymbol{\pi}_{0}) = [1~0~0]\). When label a is observed at t, we obtain \(\boldsymbol{\pi}(t \mid \boldsymbol{\pi}_{0}, \sigma)\) with σ = (a, t) according to Eq. (1):

$$ \boldsymbol{\pi}(t \mid \boldsymbol{\pi}_{0}, \sigma) = \displaystyle \frac{ \left[ \begin{array}{ccc} 1 &0 &0 \end{array}\right] \cdot \left[ \begin{array}{ccc} 0 & \mu & \mu^{\prime} \\ 0 & 0 &0 \\ 0 &0 &0 \end{array}\right] } { \left[ \begin{array}{ccc} 1 &0 &0 \end{array}\right] \cdot \left[ \begin{array}{ccc} 0 & \mu & \mu^{\prime} \\ 0 & 0 &0 \\ 0 &0 &0 \end{array}\right] \cdot \left[ \begin{array}{c} 1 \\ 1\\ 1 \end{array}\right] } = \left[ \begin{array}{ccc} 0 &\frac {\mu}{\Delta}&\frac {\mu^{\prime} }{\Delta} \end{array}\right] $$
(3)

with \({\Delta} = \mu + \mu^{\prime}\).
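Eq. (3) can be checked in the same way; with the arbitrary rates μ = 2 and μ′ = 3, the update returns [0 0.4 0.6], i.e., [0 μ/Δ μ′/Δ].

```python
import numpy as np

mu, mu_p = 2.0, 3.0                       # arbitrary positive rates for illustration
pi_minus = np.array([1.0, 0.0, 0.0])      # pi(t^- | pi0) for Fig. 4(b)
Q_a = np.array([[0.0, mu, mu_p],          # a-transitions from x1 to x2 and from x1 to x3
                [0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]])

num = pi_minus @ Q_a
print(num / num.sum())                    # -> [0. 0.4 0.6] = [0 mu/Delta mu'/Delta]
```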

Consider finally the LCTMM in Fig. 4(c) with π0 = [1 0 0]. This example evolves exactly like the example in Fig. 4(b) up to the first observation of label a at time t. From that time on, and despite the fact that no silent transition exists in this system, the probabilities of the states x2 and x3 change depending on the values of μ and \(\mu^{\prime}\), according to the extended ε sub-chain of the system (Definition 4 in (Lefebvre et al. 2022)). In particular, for a given time \(t^{\prime} \geq t\), there exists \(\alpha_{t^{\prime}} \in [0, 1]\) such that \(\boldsymbol{\pi}(t^{\prime -} \mid \boldsymbol{\pi}_{0}, (a,t)) = [0~\alpha_{t^{\prime}}~1-\alpha_{t^{\prime}}]\). When a second label a is observed at \(t^{\prime}\), we obtain \(\boldsymbol{\pi}(t^{\prime} \mid \boldsymbol{\pi}_{0}, \sigma)\) with \(\sigma = (a,t)(a,t^{\prime})\), which can be written as

$$ \boldsymbol{\pi}(t^{\prime} \mid \boldsymbol{\pi}_{0}, \sigma) = \displaystyle \frac{ \left[ \begin{array}{ccc} 0&\alpha_{t^{\prime}} &1-\alpha_{t^{\prime}} \end{array}\right] \cdot \left[ \begin{array}{ccc} 0 & \mu & \mu^{\prime} \\ 0 & \mu &0 \\ 0 &0 &\mu^{\prime} \end{array}\right] } { \left[ \begin{array}{ccc} 0&\alpha_{t^{\prime}} &1-\alpha_{t^{\prime}} \end{array}\right] \cdot \left[ \begin{array}{ccc} 0 & \mu & \mu^{\prime} \\ 0 & \mu &0 \\ 0 &0 &\mu^{\prime} \end{array}\right] \cdot \left[ \begin{array}{c} 1 \\ 1\\ 1 \end{array}\right] } = \left[ \begin{array}{ccc} \frac {\mu \alpha_{t^{\prime}}}{{\Delta}^{\prime}} &0&\frac {\mu^{\prime} (1-\alpha_{t^{\prime}}) }{{\Delta}^{\prime}} \end{array}\right] $$
(4)

with \({\Delta }^{\prime } = \mu \alpha _{t^{\prime }} +\mu ^{\prime } (1-\alpha _{t^{\prime }})\).