Abstract
We develop a probabilistic information framework via what we call irrevocably modulated filtrations produced by non-invertible matrix-valued jump processes acting on multivariate observation processes carrying noisy signals. Under certain conditions, we provide dynamical representations of conditional expectation martingales in systems where signals from randomly changing information networks may get irreversibly amalgamated or switched-off over random time horizons. We apply the framework to scenarios where the flow of information goes through multiple modulations before reaching observing agents. This leads us to introduce a Lie-type operator as a morphism between spaces of sigma-algebras, which quantifies information discrepancy caused by different modulation sequences. As another example, we show how random graphs can be used to generate irrevocably modulated filtrations that lead to pure noise scenarios. Finally, we construct systems that exhibit gradual decay of additional sources of information through the choice of spectral radii of the modulators.
1 Introduction
It is common to observe random systems in physics, engineering, neuroscience, biology and economics, where agents in the environment, in whichever form they can be represented by, are governed by the processing of complex information on a dynamic basis. This is perhaps one of the reasons why we see a significant collection of scientific literature devoted to some form of stochastic filtering; e.g., see [8] for an account of Kalman filter alone.
Numerous theoretical and practical questions can be studied as part of information processing, e.g., properties of the observation process (what is the process), the nature of information dissemination (how it diffuses), characteristics and objectives of the agents (who processes the information) and so forth. For example, a vector-valued observation process, say \(\{\varvec{\xi }_t\}_{t\ge 0}\), commonly takes a linear form \(\varvec{\xi }_t = \varvec{Z} + \varvec{\eta }_t\), where \(\varvec{Z}\) is some random vector representing the signal (e.g., message, image, payoff etc.) and \(\{\varvec{\eta }_t\}_{t\ge 0}\) is some independent stochastic process representing noise (see [1, 2, 7, 9, 22] and references therein)—it is also possible to systematically go beyond an additive form for observation processes and study their mathematical properties (see [23, 25, 36]). For the aforementioned who aspect, a natural habitat for information processing arises in control problems, where agents aim to maximize an objective (e.g., a utility function) and make decisions accordingly (see [3, 4, 6, 14, 33] and references therein). This paper is concerned with the how aspect: given an observation process in some form, how does information spread from where it originates to where it is received? We focus on finite-time systems due to their practical relevance in inference problems, where signals are revealed without noise only at some future point in time—models over infinite-time horizons can be embedded into our framework just as well.
In [34], a relevant setup was constructed by allowing information to dynamically switch on and off, where a stochastic differential equation (SDE) was derived for a class of conditional expectation martingales (i.e., the filter) governed by this mechanism—let us recall the SDE of [34, Proposition 1.1]: Let Z be some square-integrable random variable (i.e., the signal), \(\{\varvec{P}_t\}_{t\in [0,T]}\in \{0,1\}^n\) be a mutually independent vector-valued càdlàg stochastic jump process with \(T<\infty \), and \(\{\varvec{\xi }_t\}_{t\in [0,T]}\in \mathbb {R}^n\) be a noisy observation process such that each coordinate \(\{\xi ^{(i)}_t\}_{t\in [0,T]}\in \mathbb {R}\) takes the anticipative form \(\xi ^{(i)}_t = Z(t/T) + \sigma ^{(i)}\beta ^{(i)}_{tT}\), where \(\{\beta ^{(i)}_{tT}\}_{t\in [0,T]}\) is an independent Brownian bridge noise and \(\sigma ^{(i)}\in (0,\infty )\) is the signal-to-noise ratio for \(i=1,\ldots ,n\). For the rest of the paper, we use \(\otimes \) to denote the Hadamard product of \(\mathbb {R}^{n}\)-valued vectors—i.e., the Hadamard product of two \(\mathbb {R}^n\)-valued vectors is an \(\mathbb {R}^n\)-valued vector produced by element-wise multiplication.
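To fix ideas, the ingredients above can be simulated directly. The following is a minimal Python sketch (not part of the formal development; all parameters are hypothetical) of a single coordinate \(\xi _t = Z(t/T) + \sigma \beta _{tT}\) together with an on/off modulation; a deterministic switching pattern is used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps = 1.0, 1000
t = np.linspace(0.0, T, n_steps + 1)
dt = T / n_steps

Z = float(rng.integers(0, 2))                  # binary signal, revealed only at t = T
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
bridge = W - (t / T) * W[-1]                   # Brownian bridge beta_{tT} on [0, T]
sigma = 0.3
xi = Z * (t / T) + sigma * bridge              # noisy observation process

# on/off modulation P_t in {0, 1}; deterministic pattern purely for illustration
P = (np.floor(4.0 * t / T).astype(int) % 2 == 0).astype(float)
observed = P * xi                              # what the agent sees when P_t = 1
```

Note that the path satisfies \(\xi _T = Z\) exactly, i.e., the signal is revealed without noise at the horizon, while the modulated path carries no information over the switched-off intervals.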
Proposition 1.1
Let the filtration \(\{\mathcal {F}_t^{\varvec{P},\varvec{\xi }}\}_{t\in [0,T]}\) be given by
and define \({\mathcal {J}}_t = \{i : P_t^{(i)} = 1\}\), \(C_t = \sum _{u\le t} \mathbbm {1}\{\varvec{P}_{u}\ne \varvec{P}_{u-}\}\) and \(\Lambda _k = \inf \{t:C_t=k\}\) with \(\Lambda _0=0\). Then, the \(\{\mathcal {F}_t^{\varvec{P},\varvec{\xi }}\}\)-martingale \(\{Z^{\varvec{P}}_t\}_{t\in [0,T]}\) where \(Z^{\varvec{P}}_t:={\mathbb {E}}[Z \,|\, \mathcal {F}_t^{\varvec{P},\varvec{\xi }}]-{\mathbb {E}}[Z]\) admits the representation
for \(t<T\), where \(\{W_t^{(k-1)}\}\) is an \(\{\mathcal {F}_t^{\varvec{P},\varvec{\xi }}\}\)-Brownian motion between jump times, \(\{\tilde{\Theta }^{(k-1)}_t\}\) is an \(\{\mathcal {F}_t^{\varvec{P},\varvec{\xi }}\}\)-adapted process for \(k=1,\ldots ,C_t+1\), and \(\Delta Z^{\varvec{P}}_t\ne 0\) if and only if \({\mathcal {J}}_t \backslash {\mathcal {J}}_{t-} \ne \emptyset \).
Each process \(\{\xi ^{(i)}_t\}_{t\in [0,T]}\) above is called a Brownian random bridge (BRB), the information process of the Brody–Hughston information-based framework, which forms the basis of a fruitful research stream that has found numerous applications and developments in quantum measurement theory and stochastic reduction dynamics in [10, 11, 13, 29], mathematical finance and price dynamics in [18, 19, 21], theory of stochastic processes in [23, 24, 30, 35] and Markovian optimal election problems of [31].
In Proposition 1.1, the flow of information is dynamically modulated by being switched on and off randomly before reaching observing agents, who make their inference on Z based on the filtration \(\{\mathcal {F}_t^{\varvec{P},\varvec{\xi }}\}_{t\in [0,T]}\) on an \({\mathcal {L}}^2\)-best-estimate basis. Note that \(\varvec{\xi }_t\) can be recovered from \(\mathcal {F}_t^{\varvec{P},\varvec{\xi }}\) if and only if \(\varvec{P}_t={\textbf {1}}\) for any \(t\in [0,T]\). In this paper, we shall extend this behavior significantly i) by augmenting \(\{\varvec{\xi }_t\}_{t\in [0,T]}\) from BRBs to Gaussian random bridges (GRBs) of [30] that generalize Gaussian processes including Gaussian bridges of [17, 28] and ii) through what we call irrevocably modulated filtrations by using degenerate matrix-valued stochastic processes \(\{\varvec{\hat{P}}_t\}_{t\in [0,T]}\in \mathbb {R}^{n\times n}\) (matrices will be denoted with the \({\hat{\,}}\) hat) acting on \(\{\varvec{\xi }_t\}_{t\in [0,T]}\), where filtrations in a system are more broadly given by the following:
that can lead to dynamic affine mixing of information. This leads to randomly formed dynamic information networks that bring forth numerous non-trivial complexities, where closed-form solutions for the conditional expectations are no longer available in general, even in the pure Markovian case. We shall provide key conditions for the setup to remain analytically tractable, allowing explicit computations without necessarily having to resort to numerical methods. It turns out that one such condition is for the rows of the singular matrix \(\{\varvec{\hat{P}}_t\}_{t\in [0,T]}\) to be mutually orthogonal—which the system in [34] happens to satisfy. In fact, for GRBs with continuous paths, this will show that the SDE in Proposition 1.1 is of a fundamental structure, where \(\{\varvec{P}_t\}_{t\in [0,T]}\in \mathbb {R}^n\) takes the more general form
given that \(\varvec{P}_t^{(i)}\in \mathbb {R}^n\) is the vector constructed from the ith row of the matrix \(\varvec{\hat{P}}_t\). Thus, we can reach a family of SDEs that generalize the functional form as given in Proposition 1.1 by introducing a modulated GRB \(\{\varvec{\bar{\xi }}_t\}_{t\in [0,T]}\) (as will be detailed), and a mapping between two modulated sigma-algebras as follows:
We shall highlight a related stream of research called compressed sensing (see [12, 15, 16, 26, 27]), which provides important results on signal recovery in underdetermined linear systems, where we encounter noisy observations \(\varvec{\xi } = \varvec{\hat{A}}\varvec{Z} + \varvec{\eta }\), with \(\varvec{\hat{A}}\) an \(m\times n\) matrix for \(m<n\). This stream studies regularization problems of the form \(\text {min}\Vert \varvec{Z}\Vert _{{\mathcal {L}}^1}\) such that \(\Vert \varvec{\hat{A}}\varvec{Z} - \varvec{\xi }\Vert _{{\mathcal {L}}^2} \le \epsilon \in \mathbb {R}\), and hence differs from ours in focus and direction, but the mathematical setups have overlaps, where compressed sensing can be studied as a specific scenario in our dynamic modulation framework.
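For concreteness, the compressed-sensing problem above can be approximated by its Lagrangian (LASSO) form and solved with iterative soft-thresholding (ISTA); this is a standard proxy from that literature, not part of our framework, and all parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 12, 32                                   # underdetermined: m < n
A = rng.normal(size=(m, n)) / np.sqrt(m)        # sensing matrix
z_true = np.zeros(n)
z_true[[3, 17]] = [1.5, -2.0]                   # sparse signal
xi = A @ z_true + 0.01 * rng.normal(size=m)     # noisy observation

# ISTA for the Lagrangian form: min 0.5 * ||A z - xi||_2^2 + lam * ||z||_1
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the smooth part
lam = 0.02
z = np.zeros(n)
for _ in range(3000):
    g = z - (A.T @ (A @ z - xi)) / L            # gradient step on the quadratic term
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-thresholding
```

With an incoherent Gaussian sensing matrix and a sufficiently sparse signal, the iterate recovers the support and approximate magnitudes of the signal despite \(m<n\).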
2 Stochastic Setup
We adopt the convention \(\sup \emptyset = -\infty \) and work on a probability space \((\Omega ,\mathcal {F},\mathbb {P})\) equipped with a filtration \(\{\mathcal {F}_{t}\}_{t \le \infty }\), where \(\mathcal {F}_{\infty }=\mathcal {F}\). We work with right-continuous complete filtrations and stay over a finite-time interval \(\mathbb {T}=[0,T]\) for some \(T<\infty \); we also denote \(\mathbb {T}_{-}=[0,T)\). We choose \(\omega \in \Omega \) outside \({\mathbb {P}}\)-null sets and drop it from relevant expressions.
For modeling signals, we employ some random vector \(\varvec{Z}\in {\mathcal {L}}^2(\Omega ,\mathcal {F},\mathbb {P})\) with law \(\varvec{\nu }\ll \mathbb {P}\) and a state-space \(({\mathbb {R}}^n,{\mathcal {B}}({\mathbb {R}}^n))\), where \({\mathcal {B}}({\mathbb {R}}^n)\) is the Borel \(\sigma \)-field. We denote the marginals of \(\varvec{Z}\) as \(Z^{(i)}\) for \(i=1,\ldots ,n\). Every stochastic process we consider has càdlàg paths; that is, if \(\{\mathbf {X}_t\}_{t\in \mathbb {T}}\) is a stochastic process, then for any \(\omega \in \Omega \) we have \(X_.(\omega )\in {\mathcal {D}}(\mathbb {T},\mathbb {R}^n)\), where \({\mathcal {D}}(\mathbb {T},\mathbb {R}^n)\) is the Skorokhod space of \(\mathbb {R}^n\)-valued càdlàg paths for \(n\in {\mathbb {N}}_+\). We note that the law of càdlàg processes can be fully characterized by their finite-dimensional distributions using the Kolmogorov extension theorem, which is highly useful for our framework since, for example, the definition of \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) (Definition 2.1 below) relies on finite-dimensional distributions. In addition, since every càdlàg process that is adapted is also progressively measurable, stochastic integrals (e.g., integrals involving modulated information processes) that will appear in this paper are adapted, a property we need for dynamical representations of \({\mathcal {L}}^2\)-best-estimates. We shall write \(\mathcal {F}_{t}^{\mathbf {X}}=\sigma (\{\mathbf {X}_s\}:0\le s \le t)\) for the natural filtration, where \(\mathcal {F}_{t}^{\mathbf {X}}\subset \mathcal {F}_{t}\). We let \(\{\mathbf {X}_t\}_{t\in \mathbb {T}}\) be a multivariate Gaussian process with a positive-definite covariance kernel \(\hat{{\textbf {K}}}_{s,t}\) for all \(s,t\in \mathbb {T}\), where \(\mathbb {E}[\mathbf {X}_t]=\varvec{0}\) for any \(t\in \mathbb {T}\) without loss of generality.
The coordinates of \(\{\mathbf {X}_t\}_{t\in \mathbb {T}}\) are denoted as \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\) for \(i=1,\ldots ,n\). Finally, we shall use \({\mathcal {C}}^{k}(\mathbb {T},\mathbb {R}^n)\subset {\mathcal {D}}(\mathbb {T},\mathbb {R}^n)\) for \(\mathbb {R}^n\)-valued continuous functions with continuous derivatives up to kth order. To model noisy observations on \(\varvec{Z}\), we employ GRBs from [30] due to their canonical anticipative form that is natural for signal processing (see Proposition 2.2).
Definition 2.1
Let \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) be an \(\mathbb {R}^n\)-valued \(\{\mathcal {F}_{t}\}\)-adapted stochastic process such that: i) \(\varvec{\xi }_T = \varvec{Z}\), and ii) For all \(m\in {\mathbb {N}}_{+}\), every \(0\le t^{(i)}_{1}<\ldots<t^{(i)}_{m}<T\) and \((x^{(i)}_{1},\ldots ,x^{(i)}_{m})\in \mathbb {R}^m\) for \(i=1,\ldots ,n\),
\(\varvec{\nu }\)-a.e. \(\varvec{z}\). Then, \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) is an \(\mathbb {R}^n\)-valued GRB, with \(\mathcal {F}_{t}^{\varvec{\xi }}=\sigma (\{\varvec{\xi }_s\}:0\le s \le t)\) for any \(t\in \mathbb {T}\).
When \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) is viewed as a noisy observation process, each coordinate can be viewed as an information source. We assume that each \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) is mutually independent of \(Z^{(i)}\) for \(i=1,\ldots ,n\)—as such, we refer to \(\{\mathbf {X}_t\}_{t\in \mathbb {T}}\) as the driving noise of \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\).
Proposition 2.2
Let \(\varvec{\hat{K}}^{*}_{t,T}=\varvec{\hat{K}}_{t,T}\varvec{\hat{K}}^{-1}_{T,T}\), where \(\varvec{\hat{K}}^{-1}_{T,T}\) is the inverse of \(\varvec{\hat{K}}_{T,T}\). Then, \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) admits the anticipative representation
Proof
The result follows from Definition 2.1 and the orthogonal decomposition of Gaussian processes that applies since \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) given \(\varvec{Z}=\varvec{z}\) is Gaussian with \(\varvec{\hat{K}}_{T,T}\) being non-singular. \(\square \)
Remark 2.3
Neither \(\varvec{X}_t\) nor \(\varvec{X}_T\) is \(\mathcal {F}_t^{\varvec{\xi }}\)-measurable for \(t\in \mathbb {T}\). The process \(\{\varvec{X}_t - \varvec{\hat{K}}^{*}_{t,T}\varvec{X}_T\}_{t\in \mathbb {T}}\) is a Gaussian bridge (see [17]), which can also be interpreted as noise. The signal \(\varvec{Z}\) is not \(\mathcal {F}_t^{\varvec{\xi }}\)-measurable for \(t\in \mathbb {T}_{-}\) and is revealed without noise at \(t=T\) due to the representation in (4).
Note that GRBs are not necessarily Gaussian, but they generalize their corresponding drivers over finite-time horizons; e.g., GRBs are Gaussian when \(\varvec{Z}\) is (jointly) Gaussian.
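To illustrate the anticipative representation of Proposition 2.2 in this Gaussian case, take a scalar Brownian-motion driver, for which \(K_{s,t}=s\wedge t\) and hence \(K^{*}_{t,T}=t/T\). The following Monte Carlo sketch (hypothetical parameters, not part of the formal development) checks that \(\xi _T=\varvec{Z}\) and that the variance decomposes into the signal part and the Gaussian-bridge part.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_steps, n_paths = 1.0, 400, 5000
t = np.linspace(0.0, T, n_steps + 1)
dt = T / n_steps

Z = rng.normal(1.0, 0.5, size=n_paths)                    # Gaussian signal
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

K_star = t / T                        # K_{t,T} / K_{T,T} = t/T for K_{s,t} = min(s, t)
xi = K_star * Z[:, None] + (X - K_star * X[:, -1][:, None])   # anticipative form

# xi_T = Z exactly; Var(xi_t) = (t/T)^2 Var(Z) + t(T - t)/T (bridge variance)
mid = n_steps // 2                    # t = 0.5
```

At \(t=1/2\) the theoretical variance is \((1/2)^2\cdot 1/4 + 1/4 = 0.3125\), which the sample variance matches up to Monte Carlo error.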
Remark 2.4
If \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) are mutually independent, \(\varvec{\hat{K}}^{*}_{t,T}\) is a diagonal matrix. Hence, having \(K^{*(i,i)}_{t,T}=K^{(i,i)}_{t,T}/K^{(i,i)}_{T,T}\), each coordinate \(\{\xi _t^{(i)}\}_{t\in \mathbb {T}}\) for \(i=1,\ldots ,n\) is an \(\mathbb {R}\)-valued GRB satisfying the anticipative representation
Clearly, any given \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) is not necessarily Markov with respect to \(\{\mathcal {F}_t^{\varvec{\xi }}\}\), and to work with Markov GRBs, the following result is what we need.
Lemma 2.5
Any \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) is Markov with respect to \(\{\mathcal {F}_t^{\varvec{\xi }}\}\) if and only if its driving noise \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) is Markov with respect to \(\{\mathcal {F}_t^{\varvec{X}}\}\).
Proof
See Appendix. \(\square \)
As a highly relevant result for the next section, we will need the following special Markovian property for \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\), where each coordinate may have its own time-change. For this, we require the following condition on \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\).
Definition 2.6
We say \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) satisfies the time-changed Markov property at \(t\in \mathbb {T}\), if
for any \(0 \le t^{(i)}_{1}< \cdots< t^{(i)}_{k_i} < t \le T\), for \(i=1,\ldots ,n\).
Lemma 2.7
Let \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) satisfy the time-changed Markov property in (6) at \(t=T\). Then, for any \(\varvec{B}\in {\mathcal {B}}({\mathbb {R}}^n)\) and \(t^{(i)}\in \mathbb {T}_{-}\),
Proof
See Appendix. \(\square \)
Corollary 2.8
If \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) are mutually independent and Markov with respect to \(\{\mathcal {F}_t^{X^{(i)}}\}\) for \(i=1,\ldots ,n\), then (7) holds for any \(\varvec{B}\in {\mathcal {B}}({\mathbb {R}}^n)\) and \(t^{(i)}\in \mathbb {T}_{-}\).
Proof
If \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) are mutually independent for \(i=1,\ldots ,n\),
and hence, if also \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) is Markov with respect to \(\{\mathcal {F}_t^{X^{(i)}}\}\), then the time-changed Markov property in Definition 2.6 holds. \(\square \)
Definition 2.9
Let \(\vartheta :\mathbb {R}^n\rightarrow \mathbb {R}^n\) be an affine map and \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) satisfy property (6) at \(t\in \mathbb {T}\). If the transformed process \(\{\vartheta (\varvec{X}_t)\}_{t\in \mathbb {T}}\) also satisfies the time-changed Markov property at \(t\in \mathbb {T}\), we say \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) is time-changed-Markov-invariant under \(\vartheta \).
Example 2.10
Markov subclasses of Volterra processes preserve Markov property under affine transformations. That is, if we have the kernel decomposition
for \(i=1,\ldots ,n\), where each \(\{W^{(i)}_t\}_{t\in \mathbb {T}}\) is a Brownian motion, then the linear combination
is itself a Markov process, where \(\{\bar{W}_t\}_{t\in \mathbb {T}}\) is a Brownian motion.
Next, we introduce the \((\mathcal {F}_t^{\varvec{\xi }},\mathbb {P})\)-martingale \(\{Z_t\}_{t\in \mathbb {T}}\) by \(Z_t=\mathbb {E}[g(\varvec{Z}) \, | \, \mathcal {F}_t^{\varvec{\xi }} ]\) for a measurable map \(g:\mathbb {R}^n\rightarrow \mathbb {R}\), which is the \({\mathcal {L}}^2\)-best-estimate of the signal \(\varvec{Z}\), given \(\mathcal {F}_t^{\varvec{\xi }}\). When \(g=1_{(.)}\) is the indicator function, we shall denote this martingale by \(\pi _t(\mathrm {d}\varvec{z})= \mathbb {E}[ 1_{(\varvec{Z}\in \mathrm {d}\varvec{z})} \, | \, \mathcal {F}_t^{\varvec{\xi }} ]=\mathbb {P}[\varvec{Z}\in \mathrm {d}\varvec{z} \, | \, \mathcal {F}_t^{\varvec{\xi }} ]\) to distinguish it as a measure. We are interested in dynamical SDE representations of filters such as \(\{Z_t\}_{t\in \mathbb {T}}\) in this paper, and we will need \(\{\xi ^{(i)}_t\}_{t\in \mathbb {T}}\) to be an \((\mathcal {F}_t^{\xi ^{(i)}},\mathbb {P})\)-semimartingale to apply Itô calculus—but \(\{\xi ^{(i)}_t\}_{t\in \mathbb {T}}\) in general is not an \((\mathcal {F}_t^{\xi ^{(i)}},\mathbb {P})\)-semimartingale.
Example 2.11
If \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) is a mutually independent \((\mathcal {F}_t^{X^{(i)}},\mathbb {P})\)-fractional Brownian motion with Hurst exponent \(H\in (0,1)\), then \(\{\xi ^{(i)}_t\}_{t\in \mathbb {T}}\) admits the following representation:
Unless \(H=1/2\), \(\{\xi ^{(i)}_t\}_{t\in \mathbb {T}}\) is not an \((\mathcal {F}_t^{\xi ^{(i)}},\mathbb {P})\)-semimartingale. Since \(\{X^{(i)}_{t}\}_{t\in \mathbb {T}}\) is an H-self-similar process, we have
given that \(t_*=t/T\) – see [17]. Hence, \(\{\xi ^{(i)}_t\}_{t\in \mathbb {T}}\) satisfies a scaling property such that we have
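The H-self-similarity invoked in this example can be verified directly at the level of the fractional Brownian motion covariance kernel \(K_{s,t}=\tfrac{1}{2}\bigl (s^{2H}+t^{2H}-|s-t|^{2H}\bigr )\), for which \(K_{cs,ct}=c^{2H}K_{s,t}\). A short numerical check (illustrative only):

```python
import numpy as np

def fbm_cov(s, t, H):
    """Covariance kernel K_{s,t} of fractional Brownian motion with Hurst H."""
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(s - t) ** (2 * H))

H, c = 0.7, 3.0
for s, t in [(0.2, 0.9), (0.5, 0.5), (1.0, 2.0)]:
    # self-similarity at the kernel level: K_{cs,ct} = c^{2H} K_{s,t}
    assert abs(fbm_cov(c * s, c * t, H) - c ** (2 * H) * fbm_cov(s, t, H)) < 1e-12
```

For \(H=1/2\) the kernel reduces to \(K_{s,t}=s\wedge t\), recovering the Brownian (semimartingale) case.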
The following result, a corollary to [36, Proposition 2.14], is very useful for our purposes.
Proposition 2.12
If \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) is an \((\mathcal {F}_t^{\varvec{X}},\mathbb {P})\)-semimartingale, \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) is an \((\mathcal {F}_t^{\varvec{\xi }},\mathbb {P})\)-semimartingale.
For the statement below, we fix \(n=1\) and drop the (i) superscript for notational convenience—multivariate expressions will naturally appear in the next section. Also, we work with \(\{X_t\}_{t\in \mathbb {T}}\) with continuous paths, but the statement can be extended to \(\{X_t\}_{t\in \mathbb {T}}\) with jumps, as long as \(\{X_t\}_{t\in \mathbb {T}}\) is an \((\mathcal {F}_t^{X},\mathbb {P})\)-semimartingale.
Lemma 2.13
If \(\{X_t\}_{t\in \mathbb {T}}\) is a Markov \((\mathcal {F}_t^{X},\mathbb {P})\)-semimartingale with quadratic variation \(\{Q_t\}_{t\in \mathbb {T}}\) and \(K\in {\mathcal {C}}^{1}(\mathbb {T})\), there exists an \((\mathcal {F}_t^{\xi },\mathbb {P})\)-semimartingale \(\{S_t(z)\}_{t\in \mathbb {T}_{-}}\) with quadratic variation \(\{Q_t\}_{t\in \mathbb {T}_{-}}\) for any \(z\in \mathbb {R}\), such that
for \(t\in \mathbb {T}_{-}\), where \(\sigma _t(\xi ,z)=K^{\delta }_{t,T}(z - \mathbb {E}[ Z \, | \, \xi _t ])\) with \(K^{\delta }_{t,T} = K_{t,T}(K_{t,t}K_{T,T} - K_{t,T}K_{t,T})^{-1}\).
Proof
See Appendix. \(\square \)
Proposition 2.14
Let \(\{X_t\}_{t\in \mathbb {T}}\) be a Markovian \((\mathcal {F}_t^{X},\mathbb {P})\)-martingale, where \(Q\in {\mathcal {C}}^{1}(\mathbb {T})\). Then, \(\{Z_t\}_{t\in \mathbb {T}_{-}}\) satisfies the non-anticipative representation
for \(t\in \mathbb {T}_{-}\), where \(\tilde{\Theta }_u(\xi _t,Z)=(Q_t^{'})^{1/2}(Q_T - Q_t)^{-1}\Theta _t(\xi _t,Z)\), with \(\Theta _t(\xi _t,Z)=Cov [g(Z), Z \, | \, \xi _t]\), and \(\{W_t\}_{t\in \mathbb {T}_{-}}\) is a \((\mathcal {F}_t^{\xi },\mathbb {P})\)-Brownian motion. In addition, the process \(\{\Theta _t(\xi _t,Z)\}_{t\in \mathbb {T}_{-}}\) is an \((\mathcal {F}_t^{\xi },\mathbb {P})\)-supermartingale.
Proof
We have \(\{X_t\}_{t\in \mathbb {T}}\) a \((\mathcal {F}_t^{X},\mathbb {P})\)-martingale and a Markov process. If \(Q\in {\mathcal {C}}^{1}(\mathbb {T})\), using Lemma 2.13, we have the following:
where \(K^{\delta }_{t,T}=(Q_T-Q_t)^{-1}\), \(\tilde{\sigma }_t(\xi ,z)=(Q_t^{'})^{1/2}\sigma _t(\xi ,z)\), \(Q^{'}_t=\mathrm {d}Q_t/\mathrm {d}t\) and \(\{W_t\}_{t\in \mathbb {T}_{-}}\) is a \((\mathcal {F}_t^{\xi },\mathbb {P})\)-Brownian motion—which follows from Lévy characterization, Dambis–Dubins–Schwarz theorem and [36], given that Gaussian martingales satisfy \(K_{s,t}=Q_{s\wedge t}\) for any \(s,t\in \mathbb {T}\). The SDE in (15) is from Lebesgue integration on (16). The \((\mathcal {F}_t^{\xi },\mathbb {P})\)-supermartingale property of \(\{\Theta _t(\xi _t,Z)\}_{t\in \mathbb {T}_{-}}\) follows from Doob–Meyer decomposition theorem. \(\square \)
Remark 2.15
If \(\{X_t\}_{t\in \mathbb {T}}\) is an \((\mathcal {F}_t^{X},\mathbb {P})\)-Brownian motion, we are at the intersection of GRBs and Lévy random bridges of [23]. In this case, \(Q_t=t\) and \(Q^{'}_t=1\) for all \(t\in \mathbb {T}\), and we get expressions as provided in the Brody–Hughston information framework [11, 18].
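In this Brownian special case, the filter takes a Bayesian form: conditionally on \(Z=z\), the observation at time t is Gaussian with mean \(z\,t/T\) and the bridge variance \(t(T-t)/T\). The following Monte Carlo sketch (a binary signal and hypothetical parameters, purely for illustration) checks the martingale property \(\mathbb {E}[Z_t]=\mathbb {E}[Z]\) of the resulting best-estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
T, t, p = 1.0, 0.6, 0.3                       # horizon, observation time, prior P(Z = 1)
n_paths = 200000

Z = (rng.random(n_paths) < p).astype(float)   # binary signal
var_t = t * (T - t) / T                       # Brownian-bridge variance at time t
xi_t = Z * (t / T) + rng.normal(0.0, np.sqrt(var_t), n_paths)

def lik(x, z):
    """Gaussian likelihood of xi_t given Z = z (up to a common factor)."""
    return np.exp(-(x - z * t / T) ** 2 / (2.0 * var_t))

# conditional expectation E[Z | xi_t] via Bayes' formula
post = p * lik(xi_t, 1.0) / (p * lik(xi_t, 1.0) + (1.0 - p) * lik(xi_t, 0.0))
```

On average the posterior equals the prior (the tower property), while it is systematically larger on paths with \(Z=1\) than on paths with \(Z=0\), reflecting the information carried by the observation.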
3 Irrevocably Modulated Information
Let \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\in \mathbb {R}^{n\times n}\) be a mutually independent matrix-valued stochastic process with finite activity where \(\mathcal {F}_{t}^{\varvec{\hat{P}}}=\sigma (\{\varvec{\hat{P}}_s\}:0\le s \le t)\), that is singular for all \(t\in \mathbb {T}\), except when it is the identity matrix—i.e., when \(\varvec{\hat{P}}_t = \varvec{\hat{I}}_t\) for the identity matrix \(\varvec{\hat{I}}_t\). When we say that the matrix-valued random variable \(\varvec{\hat{P}}_t\) is mutually independent, we do not mean that the elements of \(\varvec{\hat{P}}_t\) are mutually independent of each other (entries of the matrix can be dependent on one another), but rather that these elements are independent of \(\varvec{Z}\) and \(\varvec{\xi }_t\) for every \(t\in \mathbb {T}\).
We denote the space of all possible \(\varvec{\hat{P}}_t\)’s as \({\mathcal {P}}(n)\), write \(\varvec{P}_t^{(i)}\in \mathbb {R}^n\) as the \(n\times 1\) vector formed from the ith row of \(\varvec{\hat{P}}_t\), and denote its (i, j)th element as \(P_t^{(i,j)}\in \mathbb {R}\) for \(i,j=1,\ldots ,n\). We can implicitly consider \(\mathbb {R}^{m\times n}\) matrices for \(m < n\) by fixing \((n-m)\) rows of \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) to the zero vector \({\textbf {0}}^{\top }\)—i.e., we can represent scenarios we see in compressed sensing (see [15]) while remaining in \({\mathcal {P}}(n)\). We now introduce the following main object.
Definition 3.1
Let \(\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }} = \sigma ( \{\varvec{\hat{P}}_u\,\varvec{\xi }_u\}_{0\le u \le t}, \{\varvec{\hat{P}}_u\}_{0\le u \le t})\). We call \(\{\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\}_{t\in \mathbb {T}}\) an irrevocably modulated filtration with respect to \(\{\mathcal {F}_t^{\varvec{\xi }}\}_{t\in \mathbb {T}}\), if \(\mathbb {P}(\varvec{\hat{P}}_t= \varvec{\hat{I}}_t)=0\) for any \(t\in \mathbb {T}\).
Whenever \(\varvec{\hat{P}}_t\ne \varvec{\hat{I}}_t\), the pair (\(\varvec{\hat{P}}_t\varvec{\xi }_t\), \(\varvec{\hat{P}}_t\)) does not provide full knowledge on the original vector \(\varvec{\xi }_t\) for any \(t\in \mathbb {T}\). Therefore, the action of \(\varvec{\hat{P}}_t\) in general shrinks the information content of \(\varvec{\xi }_t\) irreversibly at that \(t\in \mathbb {T}\), especially since \(\varvec{\hat{P}}_t\) is mutually independent of \(\varvec{Z}\) and \(\varvec{\xi }_t\) for every \(t\in \mathbb {T}\). For the remainder, to keep the special case of full access to the original \(\varvec{\xi }_t\), we shall not necessarily impose \(\mathbb {P}(\varvec{\hat{P}}_t = \varvec{\hat{I}}_t)=0\), so that \(\mathbb {P}(\varvec{\hat{P}}_t = \varvec{\hat{I}}_t)\) may be positive at any \(t\in \mathbb {T}\).
Remark 3.2
If \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) is diagonal, then \(\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}=\sigma ( \{\varvec{P}_u \otimes \,\varvec{\xi }_u\}_{0\le u \le t}, \{\varvec{P}_u\}_{0\le u \le t} )\), where \(\{\varvec{P}_t\}_{t\in \mathbb {T}}\in \mathbb {R}^n\) is a vector-valued process having the same law of \(\{diag (\varvec{\hat{P}}_t)\}_{t\in \mathbb {T}}\); where \(diag (\varvec{\hat{P}}_t)\) is the \(\mathbb {R}^n\)-valued diagonal of \(\varvec{\hat{P}}_t\) for any \(t\in \mathbb {T}\). Therefore, \(\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\) is a generalization of the \(\sigma \)-algebra given in Proposition 1.1.
Since \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) is mutually independent of \(\varvec{Z}\), its standalone appearance in \(\{\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\}_{t\in [0,T]}\) does not help with inference. Thus, even if agents in the system are aware of the modulation itself, they still cannot resolve the irrevocably modified information that is either switched-off or mixed via a linear combination. This simplifies calculations without losing the crux of our objective, and in fact, in certain cases, it does not even matter whether we separately add \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) or not, as stated below.
Remark 3.3
Let \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) be diagonal such that \(P_t^{(i,i)}\in \{0,1\}\) for all \(t\in \mathbb {T}\) for \(i=1,\ldots ,n\). Then, \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) is \(\{\sigma ( \{\varvec{\hat{P}}_u \,\varvec{\xi }_u\}_{0\le u \le t})\}_{t\in \mathbb {T}}\)-adapted if and only if \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) is continuous and \(\mathbb {P}[\xi _t^{(i)}=0]=0\) for all \(t\in (0,T]\) and \(i=1,\ldots , n\).
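The recovery mechanism behind Remark 3.2 and the adaptedness statement above can be sketched numerically: when the observation path is continuous and vanishes with probability zero for \(t>0\), the on/off state can be read off from the zeros of the modulated path. A scalar illustration with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
n_steps = 500
t = np.linspace(0.0, 1.0, n_steps + 1)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n_steps), n_steps))])
xi = 1.2 * t + W                    # continuous path; xi_t = 0 with probability 0 for t > 0

P = (rng.random(n_steps + 1) < 0.7).astype(float)   # on/off states in {0, 1}
modulated = P * xi                                  # what the sigma-algebra observes

# the off states are exactly the zeros of the modulated path (for t > 0)
P_recovered = (modulated != 0.0).astype(float)
```

At \(t=0\) the path itself is zero, so the state there cannot be recovered, matching the requirement \(\mathbb {P}[\xi _t^{(i)}=0]=0\) only for \(t\in (0,T]\).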
We can now define \(\{Z^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}\) as the \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-martingale given by \(Z^{\varvec{\hat{P}}}_t=\mathbb {E}[g(\varvec{Z}) \, | \, \mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}]\), for some measurable \(g:\mathbb {R}^n\rightarrow \mathbb {R}\). Accordingly, we define the \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-martingale \(\{\pi ^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}\) by \(\pi ^{\varvec{\hat{P}}}_t(\,\mathrm {d}\varvec{z})=\mathbb {P}(\varvec{Z} \in \,\mathrm {d}\varvec{z} \, | \, \mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }})\), when g(.) is the indicator function.
Proposition 3.4
\(Z^{\varvec{\hat{P}}}_t = \mathbb {E}[Z_t \,|\, \mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}]\).
Proof
Let \(\{\mathcal {H}_t\}_{t\in \mathbb {T}}\) be \(\mathcal {H}_t = \sigma ( \{\varvec{\xi }_u\}_{0\le u \le t}, \{\varvec{\hat{P}}_u\}_{0\le u \le t})\). Since \(\{\varvec{\hat{P}}_t\,\varvec{\xi }_t\}_{t\in \mathbb {T}}\) projects \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) onto an information subspace, we have \(\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\subseteq \mathcal {H}_t\), i.e., \(\{\mathcal {H}_t\}_{t\in \mathbb {T}}\) is an enlargement of \(\{\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\}_{t\in \mathbb {T}}\). Thus, by the tower property, \(Z^{\varvec{\hat{P}}}_t = \mathbb {E}[Z^*_t \,|\, \mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}]\), where \(Z^*_t = \mathbb {E}[g(\varvec{Z}) \, | \, \mathcal {H}_t]\). We further have \(Z^*_t=Z_t\) since \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) is independent of \(\varvec{Z}\). \(\square \)
Remark 3.5
If \(\mathbb {P}(\varvec{\hat{P}}_t = \varvec{\hat{I}}_t)=1\) for all \(t\in \mathbb {T}\), \(\{Z^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}=\{Z_t\}_{t\in \mathbb {T}}\) \(\mathbb {P}\)-a.s.
In full generality, the filter \(\{Z^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}\) does not admit closed-form solutions, and one would need numerical methods to estimate it. Nonetheless, we can still navigate toward an analytically tractable setup where explicit computations are possible. As a start, we choose \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) to be Markov with respect to \(\{\mathcal {F}_t^{\varvec{X}}\}\), where \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) are mutually independent across \(i=1,\ldots ,n\). In addition, we ask that \(\mathbb {P}(\varvec{P}^{(i)}_t = \varvec{0}, \varvec{\xi }_t= \varvec{0})=0\) for all \(t\in \mathbb {T}\) and \(i=1,\ldots ,n\). We shall also work with \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) such that \(K^{(i,i)}_{s,t}= K_{s,t}\) for all \(s,t\in \mathbb {T}\) and \(i=1,\ldots ,n\) for parsimony, which can be relaxed.
Lemma 3.6
Let \(\tau ^{(i)}_t = 0 \vee \sup \{ u : ||\varvec{P}^{(i)}_u||_{{\mathcal {L}}^2} \ne 0, u\in [0,t]\}\) for \(t\in \mathbb {T}\) and \(i=1,\ldots ,n\). If \(\varvec{P}^{(i)}_u\in \{\varvec{p}^{(i)},\varvec{0}\}\) for some \(\varvec{p}^{(i)}\ne \varvec{0}\in \mathbb {R}^n\) for all \(u\le t\), and if \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) satisfies time-changed Markov invariance under the action of each \(\varvec{p}^{(i)}\), then
where \(\{\bar{\varvec{\xi }}_{u}\}_{u\le t}\) given \(\{\varvec{\hat{P}}_u\}_{u\le t}\) is a GRB with \(\bar{\varvec{\xi }}_{\varvec{\tau }_t}=[\bar{\xi }^{(1)}_{\tau ^{(1)}_t},\ldots ,\bar{\xi }^{(n)}_{\tau ^{(n)}_t}]^{\top }\) and \(\varvec{\tau }_t=[\tau ^{(1)}_t,\ldots ,\tau ^{(n)}_t]^{\top }\), such that
with \(\bar{\varvec{X}}_{u}\) satisfying \(\varvec{P}_u\otimes \bar{\varvec{X}}_{u} = \varvec{\hat{P}}_u\varvec{X}_u\) for \(u\le t\), where \(\{\varvec{P}_t\}_{t\in \mathbb {T}}\in \mathbb {R}^{n}\) is
and \(\bar{Z}^{(i)} = ||\varvec{p}^{(i)}||_{{\mathcal {L}}^2}^{-1}(\varvec{p}^{(i)})^{\top }\varvec{Z}\) when \(||\varvec{P}^{(i)}_u||_{{\mathcal {L}}^2}\ne 0\) and \(\bar{Z}^{(i)} = 0\) otherwise.
Proof
See Appendix. \(\square \)
Remark 3.7
If \(\varvec{Z}=Z\varvec{1}\), and each element of \(\varvec{p}^{(i)}\) is positive, then
so that signal modulation acts as the ratio of the \({\mathcal {L}}^1\) and \({\mathcal {L}}^2\) norms of \(\varvec{p}^{(i)}\) for any \(i=1,\ldots ,n\).
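A quick numerical check of this norm-ratio identity (with hypothetical values): for a positive row \(\varvec{p}^{(i)}\) and \(\varvec{Z}=Z\varvec{1}\), the modulated signal is \(\Vert \varvec{p}^{(i)}\Vert _{{\mathcal {L}}^2}^{-1}(\varvec{p}^{(i)})^{\top }\varvec{Z} = (\Vert \varvec{p}^{(i)}\Vert _{{\mathcal {L}}^1}/\Vert \varvec{p}^{(i)}\Vert _{{\mathcal {L}}^2})\,Z\).

```python
import numpy as np

p_vec = np.array([0.5, 1.0, 2.0])        # a positive modulation row p^(i)
Z = 1.7                                   # common scalar signal, so Z_vec = Z * 1
Z_vec = Z * np.ones_like(p_vec)

Z_bar = (p_vec @ Z_vec) / np.linalg.norm(p_vec, 2)        # ||p||^{-1} p^T Z
ratio = np.linalg.norm(p_vec, 1) / np.linalg.norm(p_vec, 2)
assert abs(Z_bar - ratio * Z) < 1e-12     # modulation scales Z by ||p||_1 / ||p||_2
```

Since \(\Vert \varvec{p}\Vert _{{\mathcal {L}}^1}\ge \Vert \varvec{p}\Vert _{{\mathcal {L}}^2}\), the common signal is amplified by the mixing in this case.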
As of this point, without loss of generality, \(\varvec{p}^{(i)}\) is normalized such that \(\langle \varvec{p}^{(i)},\varvec{p}^{(i)} \rangle =1\) for all \(i=1,\ldots ,n\), unless stated otherwise. Thus, \(||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}=1 \Rightarrow \tau ^{(i)}_t = t\) and \(||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}=0 \Rightarrow \tau ^{(i)}_t < t\).
Remark 3.8
One can model \(\{\varvec{P}^{(i)}_u\}_{u\le t}\in \{\varvec{p}^{(i)},\varvec{0}\}\) for \(i=1,\ldots ,n\) by introducing a mutually independent stochastic process \(\{\varvec{Y}_u\}_{u\le t}\in \mathbb {R}^n\) such that
for some \({\mathcal {Y}}^{(i)}\in {\mathcal {B}}({\mathbb {R}})\). We call \(\{\varvec{Y}_u\}_{u\le t}\) an exogenous signal corrupter process.
Although Lemma 3.6 provides a significantly simplified expression, it still does not necessarily offer a closed-form solution. The reason is as follows: Note that (17) and (79) give rise to a time-changed kernel-valued process \(\{\varvec{\hat{P}}_u\varvec{\hat{\Psi }}_{\varvec{\tau }_u}\varvec{\hat{P}}^{\top }_u\}_{u\le t}\), given that
where each time-changed coordinate in (22) is as follows:
Although the kernel \(\varvec{\hat{P}}_t\varvec{\hat{\Psi }}_{\varvec{\tau }_t}\varvec{\hat{P}}^{\top }_t\) is well defined, it is singular since \(\det (\varvec{\hat{P}}_t\varvec{\hat{\Psi }}_{\varvec{\tau }_t}\varvec{\hat{P}}^{\top }_t)=0\) (unless \(\varvec{\hat{P}}_t=\varvec{\hat{I}}_t\)), which prevents us from reaching explicit expressions involving conditional density functions and may give rise to redundancy issues (i.e., superfluous information). In order to circumvent this obstruction, we need the conditional measure in (79) to admit the following factorization:
which removes any redundancy and singularity issues, and brings us to the result below.
Proposition 3.9
If \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) is Markov under affine transformations and \(\langle \varvec{p}^{(i)}\varvec{p}^{(j)} \rangle =0\) for all \(i\ne j\), then the following holds:
where the map \(h:\mathbb {R}\times \mathbb {R}\times \mathbb {T}\rightarrow \mathbb {R}_+\) above is
given that \(K^{\delta }_{\tau ^{(i)}_t,T}= K^{*}_{\tau ^{(i)}_t,T}\psi _{\tau ^{(i)}_t,\tau ^{(i)}_t}^{-1}\), for \(t\in \mathbb {T}_{-}\).
Proof
Having \(\varvec{p}^{(i)}\) mutually orthogonal for \(i=1,\ldots ,n\) and \(\varvec{P}_u\otimes \bar{\varvec{X}}_{u} = \varvec{\hat{P}}_u\varvec{X}_u\) for all \(u\le t\), by using Corollary 2.8, it follows that \(\{\bar{\varvec{X}}_u\}_{u\le t}\) satisfies the aforementioned factorization. Thus, the corresponding conditional density functions exist and the result follows from Lemma 3.6. \(\square \)
Definition 3.10
Let \(\{{\mathcal {J}}_t\}_{t\in \mathbb {T}}\) and \(\{{\mathcal {J}}^{C}_t\}_{t\in \mathbb {T}}\) be set-valued processes given by \({\mathcal {J}}_t=\{i : ||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}=1\}\), and \({\mathcal {J}}^{C}_t=\{i : ||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}=0\}\), respectively.
Since \({\mathcal {J}}_t\cup {\mathcal {J}}^{C}_t={\mathcal {I}}\) for all \(t\in \mathbb {T}\), using Proposition 3.9 and Definition 3.10, we can further decompose (24) into \({\mathcal {J}}_t\)-driven orthogonal components as follows:
given that the function-valued process \(\{\phi ^{C}_u\}_{u\le t}\) is
where \(\tau ^{(i)}_t < t\) for all \(i\in {\mathcal {J}}^{C}_t\). The \({\mathcal {J}}_t\)-decomposition in (26) will be useful in the next section when we provide dynamical SDE representations for \(\{Z^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}\).
Example 3.11
Let \(\{\varvec{\hat{P}}_u\}_{u\le t}\) be diagonal such that \(\varvec{P}^{(i)}_u\in \{\varvec{e}^{(i)},\varvec{0}\}\) for all \(u\le t\), where \(\varvec{e}^{(i)}\in \mathbb {R}^n\) is the standard basis vector such that only the ith coordinate of \(\varvec{e}^{(i)}\) is 1 and the rest is 0, for \(i=1,\ldots ,n\). Then, we have
This is the case when information flows switch on and off without getting mixed with each other. If we further set \(\varvec{Z}=Z\varvec{1}\), \(g(\varvec{Z})=g(Z)\), and choose each \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) to be a Brownian motion, we get the setup of [34].
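A minimal simulation of this on/off regime is given below (an illustrative Python sketch using the simple additive form \(\xi ^{(i)}_t = Z + \eta ^{(i)}_t\) from the introduction; the switch-off times are arbitrary and, for simplicity, each coordinate switches off at most once and stays off):

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps = 3, 200
dt = 1.0 / steps

Z = 1.7                                          # scalar signal, signal vector = Z * 1
eta = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(steps, n)), axis=0)  # Brownian noise
xi = Z + eta                                     # additive observation xi_t = Z + eta_t

# diagonal modulator: coordinate i switches off permanently at step s_i
switch_off = np.array([120, 200, 60])            # step at which P^(i) drops to 0
P = np.ones((steps, n))
for i, s in enumerate(switch_off):
    P[s:, i] = 0.0

xi_mod = P * xi                                  # modulated observation P_t * xi_t
assert np.allclose(xi_mod[:120, 0], xi[:120, 0])  # untouched while switched on

# tau^(i): last time coordinate i carried the signal (on-period is an initial segment)
tau = np.array([dt * P[:, i].sum() for i in range(n)])
assert np.isclose(tau[1], 1.0)                   # coordinate 2 never switched off
assert tau[0] < 1.0 and tau[2] < tau[0]          # the others stopped early
```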
Remark 3.12
The piecewise-enlarged filtrations introduced in [29] used for stochastic quantum reduction can be recovered by imposing the following:
1. \(\varvec{p}^{(i)}=\varvec{e}^{(i)}\) for \(i=1,\ldots ,n\),
2. \({\mathcal {J}}_0\ne \emptyset \) and \({\mathcal {J}}^C_0\ne \emptyset \),
3. \(\{|{\mathcal {J}}_{t}|\}_{t\in \mathbb {T}}\) is non-decreasing, where \(|.|\) is the cardinality.
Then, \(\{\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\}_{t\in \mathbb {T}}\) is an example of a piecewise-enlarged filtration of \(\{\mathcal {F}_t^{{\mathcal {J}}_0}\}_{t\in \mathbb {T}}\), where we defined \(\mathcal {F}_t^{{\mathcal {J}}_0}=\sigma (\{\xi ^{(i)}_u\}_{0\le u \le t} : i\in {\mathcal {J}}_0)\).
We conclude this section by extending \(\{\varvec{P}^{(i)}_u\}_{u\le t}\in \{\varvec{p}^{(i)},\varvec{0}\}\) to \(\{\varvec{P}^{(i)}_t\}_{t\in \mathbb {T}}\), where \(\mathbb {T}_{-} = \bigcup _{d=1}^m \mathbb {T}^d\) for some \(m\in \mathbb {N}_+\) such that \(\{\varvec{P}^{(i)}_t\}_{t\in \mathbb {T}^d}\in \{\varvec{p}^{(d,i)},\varvec{0}\}\) for \(i=1,\ldots ,n\) and \(d=1,\ldots ,m\). This allows us to augment Proposition 3.9 toward a broader class of \(\{\hat{\varvec{P}}_t\}_{t\in \mathbb {T}}\), and consequently, toward filters taking more general forms. First, we slightly extend Definition 2.6, where we say \(\{\varvec{X}^d_t\}_{t\in \mathbb {T}}\) for \(d=1,\ldots ,m\) satisfies the joint time-changed Markov property at \(T\in \mathbb {T}\), if
for any \(0 \le t^{(d,i)}_{1}< \cdots< t^{(d,i)}_{k_{d,i}} < T\), for \(i=1,\ldots ,n\), \(d=1,\ldots ,m\). For below, \(\mathbb {T}^d=[T_{d-1},T_d)\) for \(d=1,\ldots ,m\), where \(T_0=0\) and \(T_m=T\). We also define the set-valued process \(\{{\mathcal {T}}_t\}_{t\in \mathbb {T}_{-}}\) where \({\mathcal {T}}_t=\{d: T_{d-1} \le t\}\) for any \(t\in \mathbb {T}_{-}\), that tracks the time periods.
Proposition 3.13
Let \(\tau ^{(d,i)}_t = 0 \vee \sup \{ u : ||\varvec{P}^{(i)}_u||_{{\mathcal {L}}^2} =1, u\in [T_{d-1},t]\}\) for \(t\in \mathbb {T}^d\) and \(i=1,\ldots ,n\) and \(d=1,\ldots ,m\), where \(\varvec{P}^{(i)}_u\in \{\varvec{p}^{(d,i)},\varvec{0}\}\) for some normalized \(\varvec{p}^{(d,i)}\ne \varvec{0}\in \mathbb {R}^n\) for all \(u\in \mathbb {T}^d\). Given that \(\varvec{P}_{u^d}\otimes \bar{\varvec{X}}^d_{u^d} = \varvec{\hat{P}}_{u^d}\varvec{X}_{u^d}\), if \([\bar{\varvec{X}}^1_{u^1},\ldots ,\bar{\varvec{X}}^m_{u^m}]\) satisfy property (29) under each \(\varvec{p}^{(d,i)}\), then
where \(\{\bar{\varvec{\xi }}^d_{t}\}_{t\in \mathbb {T}^d}\) given \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}^d}\) is a GRB with \(\bar{\varvec{\xi }}^d_{\varvec{\tau }^d_t}=[\bar{\xi }^{(d,1)}_{\tau ^{(d,1)}_t},\ldots ,\bar{\xi }^{(d,n)}_{\tau ^{(d,n)}_t}]^{\top }\) and \(\varvec{\tau }^d_t=[\tau ^{(d,1)}_t,\ldots ,\tau ^{(d,n)}_t]^{\top }\), such that
with \(\bar{Z}^{(d,i)} = (\varvec{p}^{(d,i)})^{\top }\varvec{Z}\) when \(||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}=1\) and \(\bar{Z}^{(d,i)} = 0\) otherwise, for \(t\in \mathbb {T}^d\).
Proof
Following the same logic as in Lemma 3.6, we have
Then, using a generalized version of Lemma 2.7 extended for the joint property in (29)—which we shall omit to avoid repetition—we get the following:
and the rest of the proof follows in a similar way as in Lemma 3.6. \(\square \)
For the next statement, we define \(\{\bar{\varvec{\phi }}^{(i)}_{\varvec{\tau }_t^{(i)}}\}_{t\in \mathbb {T}}\) as the \(\{|{\mathcal {T}}_t|\}_{t\in \mathbb {T}}\times 1\) vector-valued process
Note that for each (i), \(\{\bar{\varvec{\phi }}^{(i)}_{\varvec{\tau }_t^{(i)}}\}_{t\in \mathbb {T}}\) is a vector-valued process that is non-decreasing in its dimension. Similarly, we define the \(\{|{\mathcal {T}}_t|\}_{t\in \mathbb {T}}\times 1\) vector \(\{\bar{\varvec{Z}}^{(i)}_t\}_{t\in \mathbb {T}}\) by
Proposition 3.14
Keep the setup in Proposition 3.13. If \(\langle \varvec{p}^{(a,i)}\varvec{p}^{(b,j)} \rangle =0\) for all \(i\ne j\), then
where the map \(H:\mathbb {R}^{|{\mathcal {T}}_t|}\times \mathbb {R}^{|{\mathcal {T}}_t|}\times \mathbb {T}\rightarrow \mathbb {R}_+\) is given by
with \(\varvec{\mu }_{\varvec{\tau }_t^{(i)}}(\bar{\varvec{z}}_t)=K_{T,T}^{-1}[K_{\tau ^{(1,i)}_t,T}\bar{z}^{(1,i)},\ldots ,K_{\tau ^{(|{\mathcal {T}}_t|,i)}_t,T}\bar{z}^{(|{\mathcal {T}}_t|,i)}]^{\top }\), and the \(|{\mathcal {T}}_t|\times |{\mathcal {T}}_t|\) kernel matrix
where \(p^{(i)}_{(j,k)}=\varvec{p}^{(j,i)}(\varvec{p}^{(k,i)})^{\top }\), for \(i=1,\ldots ,n\) and \(t\in \mathbb {T}_{-}\).
Proof
The proof follows similar to Proposition 3.9, since \(\langle \varvec{p}^{(a,i)}\varvec{p}^{(b,j)} \rangle =0\) for all \(i\ne j\) implies
for \(i=1,\ldots ,n\), \(d=1,\ldots ,m\). When \(\varvec{\hat{\Gamma }}^{-1}_{\varvec{\tau }_t^{(i)}}\) exists for all \(i=1,\ldots ,n\), the conditional densities exist, and the result follows from Proposition 3.13. \(\square \)
Remark 3.15
One can consider multiple signal vectors \(\varvec{Z}^{\alpha }\in {\mathcal {L}}^2(\Omega ,\mathcal {F},\mathbb {P})\) taking values in \((\mathbb {R}^{n_\alpha },{\mathcal {B}}(\mathbb {R}^{n_\alpha }))\), noisy observation processes \(\{\varvec{\xi }^{\alpha }_t\}_{t\in \mathbb {T}}\in \mathbb {R}^{n_\alpha }\) and \(\{\varvec{\hat{P}}^\alpha _t\}_{t\in \mathbb {T}}\in \mathbb {R}^{n_\alpha \times n_\alpha }\) from \({\mathcal {P}}(n_\alpha )\) for \(\alpha =1,\ldots ,m\) for some \(m\in \mathbb {N}_+\)—here, \(n_\alpha \) highlights that the dimensions may vary with respect to \(\alpha \). If all are mutually independent across \(\alpha =1,\ldots ,m\), then we simply have
given that \(\pi ^{\varvec{\hat{P}}^\alpha }_{t^{\alpha }}(\mathrm {d}\varvec{z}^\alpha )=\mathbb {P}(\varvec{Z}^\alpha \in \mathrm {d}\varvec{z}^\alpha \, | \, \mathcal {F}_{t^{\alpha }}^{\varvec{\hat{P}}^\alpha ,\varvec{\xi }^\alpha })\), where each can be calculated using Proposition 3.14 under the given constraints, by setting \(g=1_{(.)}\) and applying their own time changes.
3.1 Dynamical Representations
We keep the conditions in Proposition 3.9 to provide an SDE representation for \(\{Z^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}\), and choose \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\) with continuous paths, which already provide fairly involved expressions for the dynamics of \(\{Z^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}\). Nonetheless, the result below can be extended for Proposition 3.14 as well as for \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\) with general càdlàg paths. We introduce the \(\{\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\}\)-adapted counting processes \(\{C_t\}_{t\in \mathbb {T}}\) and \(\{\delta _t\}_{t\in \mathbb {T}}\) as follows:
Hence, \({\mathcal {K}}_t \ne \emptyset \) if and only if at least one coordinate of the vector in (19) that equals zero at \(t-\in \mathbb {T}\) becomes nonzero at \(t\in \mathbb {T}\). We also have \(C_t\ge \sum _{u\le t}\delta _u\) for any \(t\in \mathbb {T}\). Finally, we define \(\Lambda _k = \inf \{t:C_t=k\}\) with \(\Lambda _0=0\).
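These counting objects can be made concrete on a discrete grid (an illustrative Python sketch; the particular piecewise-constant on/off path below is an arbitrary toy example):

```python
import numpy as np

dt = 0.25
# a toy piecewise-constant path of the on/off vector P_t (two coordinates)
P_path = np.array([
    [1, 1], [1, 1], [1, 0], [1, 0], [0, 0],
    [0, 0], [1, 0], [1, 0], [1, 1], [1, 1],
])

# C_t: cumulative count of jump times of the path (any coordinate changes)
jumps = np.any(P_path[1:] != P_path[:-1], axis=1)
C = np.concatenate([[0], np.cumsum(jumps)])

# delta_t = 1{K_t nonempty}: a coordinate that was zero at t- becomes nonzero at t
reactivations = np.any((P_path[:-1] == 0) & (P_path[1:] == 1), axis=1)
delta = np.concatenate([[0], reactivations.astype(int)])

# Lambda_k = inf{t : C_t = k}
Lambda = {k: dt * int(np.argmax(C >= k)) for k in range(1, C[-1] + 1)}

assert C[-1] == 4                  # four jump times along this path
assert sum(delta) == 2             # two of them are re-activations
assert C[-1] >= sum(delta)         # C_t >= sum of delta_u, as stated
assert Lambda[1] == 0.5 and Lambda[3] == 1.5
```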
Lemma 3.16
If each \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\) is an \((\mathcal {F}_t^{X^{(i)}},\mathbb {P})\)-semimartingale with quadratic variation \(\{Q_t^{(i)}\}_{t\in \mathbb {T}}\) and \(K\in {\mathcal {C}}^{1}(\mathbb {T})\), there exist \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-semimartingales \(\{S^{(i,k-1)}_t(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t < \Lambda _k}\) for \(k=1,\ldots ,C_t+1\) with quadratic variations \(\{Q^{(i)}_t\}_{t\in \mathbb {T}}\) for any \(\varvec{z}\in \mathbb {R}^n\) and \(\varvec{p}^{(i)}\in \mathbb {R}^n\) for \(i=1,\ldots ,n\), such that
for \(t\in \mathbb {T}_{-}\), where \(\sigma _t(\bar{\varvec{\xi }},\bar{z}^{(i)}) = K^{\delta }_{t,T}(\bar{z}^{(i)}-\mathbb {E}[\bar{Z}^{(i)}\,|\, \bar{\varvec{\xi }}_{\varvec{\tau }_t}, \, \mathcal {F}_t^{\varvec{\hat{P}}}])\).
Proof
See Appendix. \(\square \)
Proposition 3.17
Let \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\) be an \((\mathcal {F}_t^{X^{(i)}},\mathbb {P})\)-martingale, where \(Q^{(i)}\in {\mathcal {C}}^{1}(\mathbb {T})\) for \(i=1,\ldots ,n\). Then, \(\{Z^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}_{-}}\) satisfies
for \(t\in \mathbb {T}_{-}\), given that \(\tilde{\Theta }_t(\bar{\varvec{\xi }},\bar{Z}^{(i)})=[(Q^{(i)}_t)^{'}]^{1/2}(Q^{(i)}_T-Q^{(i)}_t)^{-1}\Theta _t(\bar{\varvec{\xi }},\bar{Z}^{(i)})\) where \(\Theta _t(\bar{\varvec{\xi }},\bar{Z}^{(i)}) = Cov [g(\varvec{Z}),\bar{Z}^{(i)}\,|\, \bar{\varvec{\xi }}_{\varvec{\tau }_t}, \, \mathcal {F}_t^{\varvec{\hat{P}}}]\), and \(\{W_t^{(i,k-1)}\}_{\Lambda _{k-1}\le t < \Lambda _k}\) are \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-Brownian motions.
Proof
Since each \(\{S_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) from Lemma 3.16 is a continuous \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-semimartingale, we can decompose it as
where \(\{W_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) is a continuous \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-local martingale, and the adapted continuous process \(\{A_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) has bounded variation. First, take \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\) to be an \((\mathcal {F}_t^{X^{(i)}},\mathbb {P})\)-Brownian motion. Since \(\{A_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) has zero quadratic variation in any case and \(\mathrm {d}Q_t^{(i)}=\mathrm {d}t\) for any \(i\in {\mathcal {I}}\), we have \(\mathrm {d}\langle W_t^{(i,k-1)}(\bar{z}^{(i)}), W_t^{(i,k-1)}(\bar{z}^{(i)})\rangle = \mathrm {d}t\). Thus, \(\{W_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) must be an \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-Brownian motion by the Lévy characterization theorem. Hence, we have
for \(\Lambda _{k-1}\le t < \Lambda _{k}\) as a Doléans–Dade exponential, where \(\{\tilde{{\mathcal {E}}}^{(k-1)}_t(\bar{\varvec{\xi }},\varvec{z})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) is
given that \(\rho ^{(ij)}_t\) is the correlation factor across \(\{W_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\). Since \(\{\pi ^{\varvec{\hat{P}}}_t\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) is an \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-martingale, we have
Since \(\{W_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) is a Brownian motion, and (45) must hold, \(\{A_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) must be a constant \(\mathbb {P}\)-a.s. Hence, there exists an \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-Brownian motion \(\{W_t^{(i,k-1)}\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) such that we can replace \(\mathrm {d}S_t^{(i,k-1)}(\bar{z}^{(i)}) = \mathrm {d}W_t^{(i,k-1)}\). When \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\) is a more general continuous \((\mathcal {F}_t^{X^{(i)}},\mathbb {P})\)-martingale, the Dambis–Dubins–Schwarz theorem applies through time-changed Brownian motions, and thus, we get
where \(\tilde{\sigma }_t(\bar{\varvec{\xi }},\bar{z}^{(i)})=[(Q^{(i)}_t)^{'}]^{1/2}\sigma _t(\bar{\varvec{\xi }},\bar{z}^{(i)})\) and \(\{W_t^{(i,k-1)}\}_{\Lambda _{k-1}\le t < \Lambda _k}\) are \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-Brownian motions. Finally, (41) follows from Lebesgue integration. \(\square \)
Note that the SDE in Proposition 3.17 is a generalization of the form in Proposition 1.1—due to the conditions imposed as part of Proposition 3.9, one can start from an already modulated GRB \(\{\bar{\varvec{\xi }}_{t}\}_{t\in \mathbb {T}}\) from the outset, and switch the coordinates on and off.
Corollary 3.18
Keep the setup in Proposition 3.17. Then,
for \(t\in \mathbb {T}_{-}\), where \({\mathcal {E}}^{(k-1)}_{\Lambda _k}(\bar{\varvec{\xi }},\varvec{z})\) for \(k=1,\ldots ,C_t\) is given by
and where \(\mathcal {\hat{E}}^{(C_t)}_{t}(\bar{\varvec{\xi }},\varvec{z})\) is the same as in (46) where \((k-1)\) is replaced by \(C_t\), \(\Lambda _k-\) is replaced by t, and the jump term \(\mathbbm {1}\{{\mathcal {K}}_{\Lambda _k} \ne \emptyset \}=0\).
Note that \(\{Z^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}\) jumps if and only if a previously zero coordinate of \(\{\varvec{P}_t\}_{t\in \mathbb {T}}\) in (19) becomes nonzero (not when a nonzero coordinate becomes zero). Also, if \({\mathcal {J}}_{\Lambda _{k-1}}=\emptyset \), that is if \(\varvec{P}_{\Lambda _{k-1}}=\varvec{0}\) in (19) and all information is lost at time \(\Lambda _{k-1}\) for some \(k=1,\ldots ,C_t\), then \(\{Z^{\varvec{\hat{P}}}_t\}_{\Lambda _{k-1} \le t < \Lambda _{k}} = Z^{\varvec{\hat{P}}}_{\Lambda _{k-1}}\).
Remark 3.19
From Proposition 3.4, \(Z^{\varvec{\hat{P}}}_t = \mathbb {E}[ Z_t \,|\, \mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}]\), and hence, using Proposition 2.14,
for \(t\in \mathbb {T}_{-}\), given that \(\tilde{\Theta }^*_t(\varvec{\xi },Z^{(i)}) = [(Q^{(i)}_t)^{'}]^{1/2}(Q^{(i)}_T-Q^{(i)}_t)^{-1}\Theta ^*_t(\varvec{\xi },Z^{(i)})\) where \(\Theta ^*_t(\varvec{\xi },Z^{(i)}) = Cov [g(\varvec{Z}),Z^{(i)}\,|\, \varvec{\xi }_t]\) and \(\{W_t^{(i)}\}_{t\in \mathbb {T}_{-}}\) are \((\mathcal {F}_t^{\varvec{\xi }},\mathbb {P})\)-Brownian motions.
From Proposition 3.17 and Remark 3.19, if we take each \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\) to be an \((\mathcal {F}_t^{X^{(i)}},\mathbb {P})\)-martingale for \(i=1,\ldots ,n\), we see that the following holds:
where we set \({\mathcal {J}}_{0}\ne \emptyset \) and \(\{|{\mathcal {J}}_{t}|\}_{t\in \mathbb {T}}\) to be non-decreasing. The identity above links \((\mathcal {F}_t^{\varvec{\xi }},\mathbb {P})\)-Brownian motions with \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-Brownian motions, as well as \(\{\Theta ^*_t(\varvec{\xi },Z^{(i)})\}_{t\in \mathbb {T}_{-}}\) with \(\{\Theta _t(\bar{\varvec{\xi }},\bar{Z}^{(i)})\}_{t\in \mathbb {T}_{-}}\).
Proposition 3.20
For \({\mathcal {J}}_{\Lambda _{k-1}}\ne \emptyset \), \(\{\Theta _t(\bar{\varvec{\xi }},\bar{Z}^{(i)})\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) is an \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-supermartingale.
Proof
Note that \(\{\bar{Z}^{(i)}_t\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) defined by \(\bar{Z}^{(i)}_t=\mathbb {E}[g(\varvec{Z})\bar{Z}^{(i)} \, | \, \mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}]\) is an \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-martingale. Also, \(\{\bar{V}^{(i)}_t\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) defined by \(\bar{V}^{(i)}_t=Z^{\varvec{\hat{P}}}_t\mathbb {E}[\bar{Z}^{(i)} \, | \, \mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}]\) is an \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-submartingale by using Proposition 3.17 and applying the Itô product rule, where non-negative drift terms arise due to cross-products of the \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-Brownian motions. Thus, using the Doob–Meyer theorem, we can uniquely decompose \(\{\bar{V}^{(i)}_t\}_{\Lambda _{k-1}\le t <\Lambda _{k}}\) into an \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-martingale plus a non-decreasing process. It follows that \(\mathbb {E}[\Theta _t(\bar{\varvec{\xi }},\bar{Z}^{(i)}) \, | \, \mathcal {F}_s^{\varvec{\hat{P}},\varvec{\xi }}] \le \Theta _s(\bar{\varvec{\xi }},\bar{Z}^{(i)})\) for any \(\Lambda _{k-1}\le s< t <\Lambda _{k}\). \(\square \)
3.2 From Multi-Order Modulation to Information Discrepancy
We can employ irrevocably modulated filtrations to model sequential multi-order modulation, where the flow of information is exposed to multiple modifications in some order before reaching agents. Since \({\mathcal {P}}(n)\) is closed under multiplication, we can choose \(\varvec{\hat{A}}_t\in {\mathcal {P}}(n)\) and \(\varvec{\hat{B}}_t\in {\mathcal {P}}(n)\), and consider \(\varvec{\hat{A}}_t\varvec{\hat{B}}_t\in {\mathcal {P}}(n)\) and \(\varvec{\hat{B}}_t\varvec{\hat{A}}_t\in {\mathcal {P}}(n)\), where
where \(\varvec{\hat{0}}\in {\mathcal {P}}(n)\) is the zero-matrix, and \(L:\mathbb {R}^{n\times n}\times \mathbb {R}^{n\times n}\rightarrow \mathbb {R}^{n\times n}\) is the Lie bracket such that \(L[\varvec{\hat{A}}_t,\varvec{\hat{B}}_t]=\varvec{\hat{A}}_t\varvec{\hat{B}}_t-\varvec{\hat{B}}_t\varvec{\hat{A}}_t\). Here, we have the filtrations \(\{\mathcal {F}_t^{\varvec{\hat{A}\hat{B}},\varvec{\xi }}\}_{t\in \mathbb {T}}\) and \(\{\mathcal {F}_t^{\varvec{\hat{B}\hat{A}},\varvec{\xi }}\}_{t\in \mathbb {T}}\) given by the following:
For the non-commutative case in (48), i) neither \(\varvec{\hat{A}}_t\) nor \(\varvec{\hat{B}}_t\) can be \(\varvec{\hat{I}}_t\in {\mathcal {P}}(n)\) or \(\varvec{\hat{0}}\in {\mathcal {P}}(n)\), and ii) neither can \(\varvec{\hat{B}}_t\,\varvec{\xi }_t\) be recovered from \(\varvec{\hat{A}}_t\varvec{\hat{B}}_t\,\varvec{\xi }_t\) nor \(\varvec{\hat{A}}_t\,\varvec{\xi }_t\) from \(\varvec{\hat{B}}_t\varvec{\hat{A}}_t\,\varvec{\xi }_t\). Note that the singularity of \(\{\varvec{\hat{A}}_t\}_{t\in \mathbb {T}}\) and \(\{\varvec{\hat{B}}_t\}_{t\in \mathbb {T}}\) is crucial for (48)—if \(\{\varvec{\hat{A}}_t\}_{t\in \mathbb {T}}\) and \(\{\varvec{\hat{B}}_t\}_{t\in \mathbb {T}}\) were instead non-singular, then the non-commutativity would make no difference for signal inference, since we would have \(\mathbb {E}[\left. g(\varvec{Z}) \,\right| \, \mathcal {F}_t^{\varvec{\hat{A}\hat{B}},\varvec{\xi }}]=\mathbb {E}[\left. g(\varvec{Z}) \,\right| \, \mathcal {F}_t^{\varvec{\hat{B}\hat{A}},\varvec{\xi }}]=\mathbb {E}[\left. g(\varvec{Z}) \,\right| \, \mathcal {F}_t^{\varvec{\xi }}]\).
In order to quantify the information discrepancy caused by non-commutative sequences of modulation, we shall introduce a Lie-type operator acting on the space of \(\sigma \)-algebras.
Definition 3.21
Let \({\mathbb {F}}(n,\varvec{\xi })\) be the space of all irrevocably modulated \(\sigma \)-algebras \(\mathcal {F}^{\varvec{\hat{P}},\varvec{\xi }}\) for all possible \(\varvec{\hat{P}}\in {\mathcal {P}}(n)\) for some \(\varvec{\xi }\). Then, \(L^*:{\mathbb {F}}(n,\varvec{\xi })\otimes {\mathbb {F}}(n,\varvec{\xi })\rightarrow {\mathbb {L}}(n,\varvec{\xi })\) is such that
where \({\mathbb {L}}(n,\varvec{\xi })\) is the space of all \(\sigma \)-algebras of the form in (52), for any \(\varvec{\hat{P}}^\alpha \in {\mathcal {P}}(n)\) and \(\varvec{\hat{P}}^\beta \in {\mathcal {P}}(n)\).
We shall sometimes use \(\varvec{\hat{L}}^{\alpha ,\beta }=L[\varvec{\hat{P}}^\alpha ,\varvec{\hat{P}}^\beta ]\) for notational convenience. Note that if \(\varvec{\hat{L}}^{\alpha ,\beta }=\varvec{\hat{0}}\), then \(L^*[\mathcal {F}^{\varvec{\hat{P}}^\alpha ,\varvec{\xi }},\mathcal {F}^{\varvec{\hat{P}}^\beta ,\varvec{\xi }}]=\sigma (\varvec{\hat{P}}^\alpha , \varvec{\hat{P}}^\beta )\). Although \(\varvec{\hat{P}}^\alpha \varvec{\hat{P}}^\beta \in {\mathcal {P}}(n)\) due to closure under multiplication, \({\mathcal {P}}(n)\) is not closed under the action of L, and hence, \(\varvec{\hat{L}}^{\alpha ,\beta }\) is not necessarily an element of \({\mathcal {P}}(n)\).
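The last point can be verified on a small example (an illustrative Python sketch; the nilpotent shift matrices below are assumed, purely for the illustration, to be admissible modulators in \({\mathcal {P}}(2)\)):

```python
import numpy as np

# two singular (non-invertible) matrices: nilpotent shifts
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
assert np.isclose(np.linalg.det(A), 0.0) and np.isclose(np.linalg.det(B), 0.0)

AB, BA = A @ B, B @ A
L = AB - BA                          # Lie bracket L[A, B] = AB - BA

# products of singular matrices stay singular (closure under multiplication)...
assert np.isclose(np.linalg.det(AB), 0.0) and np.isclose(np.linalg.det(BA), 0.0)
# ...but the bracket can escape the singular class: here L = diag(1, -1) is invertible
assert np.isclose(abs(np.linalg.det(L)), 1.0)
```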
Remark 3.22
Although \({\mathbb {L}}(n,\varvec{\xi })\) includes irrevocably modulated \(\sigma \)-algebras, it also includes \(\sigma \)-algebras such as \(\sigma (\varvec{\xi }, \varvec{\hat{P}}^\alpha , \varvec{\hat{P}}^\beta )\) when \((\varvec{\hat{L}}^{\alpha ,\beta })^{-1}\) exists and \(\varvec{\xi }\) can be recovered from \(\sigma ( \varvec{\hat{L}}^{\alpha ,\beta }\varvec{\xi }, \varvec{\hat{P}}^\alpha , \varvec{\hat{P}}^\beta )\).
We denote by \({\mathcal {P}}^C(n)\) the space of non-singular \(\mathbb {R}^{n\times n}\) matrices, so that \({\mathcal {P}}(n) \bigcap {\mathcal {P}}^C(n)=\{\varvec{\hat{I}}\}\). We let \({\mathcal {Q}}(n)\) be the space of \(\mathbb {R}^{n\times n}\) matrices formed via \(L[\varvec{\hat{P}}^\alpha ,\varvec{\hat{P}}^\beta ]\) for any \(\varvec{\hat{P}}^\alpha \in {\mathcal {P}}(n)\) and \(\varvec{\hat{P}}^\beta \in {\mathcal {P}}(n)\). Any \(\{\varvec{\hat{L}}^{\alpha ,\beta }_t\}_{t\in \mathbb {T}}\in {\mathcal {Q}}(n)\) is \(\{\mathcal {F}_{t}\}\)-adapted and jumps whenever \(\{\varvec{\hat{P}}^\alpha _t\}_{t\in \mathbb {T}}\in {\mathcal {P}}(n)\) or \(\{\varvec{\hat{P}}^\beta _t\}_{t\in \mathbb {T}}\in {\mathcal {P}}(n)\) jumps. Note that \(\{\varvec{\hat{L}}^{\alpha ,\beta }_t\}_{t\in \mathbb {T}}\) can jump between \({\mathcal {P}}(n)\) and \({\mathcal {P}}^C(n)\).
As an application, we can consider systems with information asymmetry, which is a fruitful research area in mathematical economics and game theory. We can consider regular agents \(A^\alpha \) and \(A^\beta \) having access to either \(\mathcal {F}^{\varvec{\hat{P}}^\alpha ,\varvec{\xi }}\) or \(\mathcal {F}^{\varvec{\hat{P}}^\beta ,\varvec{\xi }}\), respectively (but not to both), and there may even exist another group of agents \(A^*\) who have access to \(L^*[\mathcal {F}^{\varvec{\hat{P}}^\alpha ,\varvec{\xi }},\mathcal {F}^{\varvec{\hat{P}}^\beta ,\varvec{\xi }}]\)—hence, whenever \(\varvec{\hat{L}}^{\alpha ,\beta }\in {\mathcal {P}}^C(n)\), \(A^*\) can recover \(\varvec{\xi }\) to their advantage with better inference capability. Then, using
we can provide a dynamic quantification of the information asymmetry between agents \(A^\alpha \) and \(A^\beta \). In doing so, we associate \(Z^{\varvec{\hat{A}\hat{B}}}_t = \mathbb {E}[\left. g(\varvec{Z}) \,\right| \, \mathcal {F}_t^{\varvec{\hat{A}\hat{B}},\varvec{\xi }}]\) to agent \(A^\alpha \), \(Z^{\varvec{\hat{B}\hat{A}}}_t = \mathbb {E}[\left. g(\varvec{Z}) \,\right| \, \mathcal {F}_t^{\varvec{\hat{B}\hat{A}},\varvec{\xi }}]\) to agent \(A^\beta \), and define
For notational convenience, we simply write \(\varvec{\hat{L}}_t=L[\varvec{\hat{A}}_t\varvec{\hat{B}}_t,\varvec{\hat{B}}_t\varvec{\hat{A}}_t]\) without additional superscripts. From \(\varvec{\hat{L}}_t\), we can construct \(\varvec{L}_t\in \mathbb {R}^{n}\) by
where \(\varvec{L}_t^{(i)}\in \mathbb {R}^n\) is the \(n\times 1\) vector formed from the ith row of \(\varvec{\hat{L}}_t\). Let \(\bar{Z}^{*,(i)}_t= ||\varvec{L}^{(i)}_t||_{{\mathcal {L}}^2}^{-1}(\varvec{L}^{(i)}_t)^{\top }\varvec{Z}\) when \(||\varvec{L}^{(i)}_t||_{{\mathcal {L}}^2}\ne 0\) and \(\bar{Z}^{*,(i)}_t=0\) otherwise, and accordingly, define \(\{\bar{\xi }^{*,(i)}_t\}_{t\in \mathbb {T}}\) for \(i=1,\ldots ,n\) as
such that \(\varvec{L}_t\otimes \bar{\varvec{X}}^*_{t} = \varvec{\hat{L}}_t\varvec{X}_t\) for \(t\in \mathbb {T}\). Also, let \({\mathcal {J}}^*_t=\{i : ||\varvec{L}^{(i)}_t||_{{\mathcal {L}}^2} \ne 0\}\) for the statement below.
Corollary 3.23
Let \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) be Markov with mutually independent coordinates. If \(\{\varvec{\hat{L}}_u\}_{u\le t}\in {\mathcal {P}}(n)\), then if \(\varvec{L}^{(i)}_u\in \{\varvec{l}^{(i)},\varvec{0}\}\) for \(\varvec{l}^{(i)}\ne \varvec{0}\in \mathbb {R}^n\) for all \(u \le t\), such that \(\langle \varvec{l}^{(i)}\varvec{l}^{(j)} \rangle =0\) for all \(i\ne j\) and \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) satisfies time-changed Markov invariance under the action of each \(\varvec{l}^{(i)}\),
If \(\varvec{\hat{L}}_t\in {\mathcal {P}}^C(n)\) at any \(t\in \mathbb {T}\), then \({\mathcal {J}}^*_t={\mathcal {I}}\), \(h(\bar{z}^{*,(i)},\bar{\xi }^{*,(i)}_t,t)=h(z^{(i)},\xi ^{(i)}_t,t)\) and \(\phi ^{C}_t(\varvec{\bar{z}}^*)=1\) in (57).
For SDE representations, define \(\tau ^{*,(i)}_t = 0 \vee \sup \{ u : ||\varvec{L}^{(i)}_u||_{{\mathcal {L}}^2} \ne 0, u\in [0,t]\}\) for \(t\in \mathbb {T}\), \(C^*_t = \sum _{u\le t} \mathbbm {1}\{\varvec{\hat{L}}_{u}\ne \varvec{\hat{L}}_{u-}\}\), \(\delta _t = \mathbbm {1}\{{\mathcal {J}}^*_t \backslash {\mathcal {J}}^*_{t-} \ne \emptyset \}\), and finally, \(\Lambda ^*_k = \inf \{t:C^*_t=k\}\) with \(\Lambda ^*_0=0\); the dynamics would then involve \((L^*[\mathcal {F}^{\varvec{\hat{P}}^\alpha ,\varvec{\xi }}_t,\mathcal {F}^{\varvec{\hat{P}}^\beta ,\varvec{\xi }}_t],\mathbb {P})\)-Brownian motions.
One can surely propose alternative ways to quantify information discrepancy—e.g., via f-divergences (see [5, 20]). In this spirit, if \(\pi ^{\varvec{\hat{A}\hat{B}}}_t(\mathrm {d}\varvec{z})= p^{\varvec{\hat{A}\hat{B}}}_t(\varvec{z})\mathrm {d}\varvec{z}\) and \(\pi ^{\varvec{\hat{B}\hat{A}}}_t(\mathrm {d}\varvec{z})= p^{\varvec{\hat{B}\hat{A}}}_t(\varvec{z})\mathrm {d}\varvec{z}\), the symmetric Kullback–Leibler divergence
can be used, which defines a symmetric divergence-valued process. On the other hand, \(\{p^{L^*}_t(\mathrm {d}\varvec{z})\}_{t\in \mathbb {T}}\) in (54) is a measure-valued process, and it can be used to model inference dynamics of the agent group \(A^*\).
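For Gaussian conditional densities, the symmetric Kullback–Leibler divergence admits a closed form, which the following Python sketch illustrates (the Gaussian parameters are arbitrary; the closed-form expression for the Gaussian KL divergence is standard):

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL( N(m1, s1^2) || N(m2, s2^2) )."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def sym_kl(m1, s1, m2, s2):
    """Symmetrized (Jeffreys) divergence."""
    return kl_gauss(m1, s1, m2, s2) + kl_gauss(m2, s2, m1, s1)

# plain KL is asymmetric; the symmetrized version is not
a = kl_gauss(0.0, 1.0, 1.0, 2.0)
b = kl_gauss(1.0, 2.0, 0.0, 1.0)
assert not np.isclose(a, b)
assert np.isclose(sym_kl(0.0, 1.0, 1.0, 2.0), sym_kl(1.0, 2.0, 0.0, 1.0))
assert np.isclose(sym_kl(0.3, 1.5, 0.3, 1.5), 0.0)  # zero iff the laws coincide
```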
3.3 From Random Graphs to Pure Noise Scenarios
Let \(\{G(v,e_t)\}_{t\in \mathbb {T}}\) be an undirected graph-valued stochastic process with n-vertices v and randomly evolving edges \(\{e_t\}_{t\in \mathbb {T}}\). Then, there exists a Laplacian matrix \(\varvec{\hat{G}}_t\in \mathbb {R}^{n\times n}\) that canonically represents \(G(v,e_t)\) at any time \(t\in \mathbb {T}\). Since each \(\varvec{G}^{(i)}_t\in \mathbb {R}^{n}\) satisfies \(\sum _{j=1}^nG_t^{(i,j)}=0\) for \(i=1,\ldots ,n\), \(\{\varvec{\hat{G}}_t\}\) must be singular, and hence, \(\varvec{\hat{G}}_t\in {\mathcal {P}}(n)\). Therefore, we can employ \(\{G(v,e_t)\}_{t\in \mathbb {T}}\) to generate what we specifically call a graph-induced irrevocably modulated filtration \(\{\mathcal {F}_t^{G(v,e),\varvec{\xi }}\}_{t\in \mathbb {T}}\) given by
Surely, \(\mathcal {F}_t^{G(v,e),\varvec{\xi }}=\mathcal {F}_t^{\varvec{\hat{G}},\varvec{\xi }}\), and we allow the notational difference to highlight it as graph-induced for the statement below, where we also write \(\pi ^{G(v,e)}_t(\mathrm {d}\varvec{z})=\mathbb {P}(\varvec{Z}\in \mathrm {d}\varvec{z} \,|\, \mathcal {F}_t^{G(v,e),\varvec{\xi }})=\mathbb {P}(\varvec{Z}\in \mathrm {d}\varvec{z} \,|\, \mathcal {F}_t^{\varvec{\hat{G}},\varvec{\xi }})\).
Proposition 3.24
Let \(\varvec{Z}=Z\varvec{1}\). Then, \(\pi ^{G(v,e)}_t(\mathrm {d}\varvec{z})=\nu (\mathrm {d}\varvec{z})\) for all \(t\in \mathbb {T}\).
Proof
Since \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) is mutually independent of \(\varvec{Z}\) and \(\sum _{j=1}^nG_t^{(i,j)}=0\) for \(i=1,\ldots ,n\), any dependence on \(\varvec{Z}\) drops from \(\{\varvec{\hat{G}}_t\,\varvec{\xi }_t\}_{t\in \mathbb {T}}\) through \(\varvec{\hat{G}}_t\varvec{1}=\varvec{0}\) using Proposition 2.2. \(\square \)
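Proposition 3.24 can be checked numerically (an illustrative Python sketch; the random adjacency matrix and the noise realization are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# random undirected graph: symmetric 0/1 adjacency A, Laplacian G = D - A
A = np.triu(rng.integers(0, 2, size=(n, n)), k=1)
A = A + A.T
G = np.diag(A.sum(axis=1)) - A

assert np.allclose(G @ np.ones(n), 0.0)   # Laplacian rows sum to zero: G 1 = 0
assert np.isclose(np.linalg.det(G), 0.0)  # hence G is singular

# with a common signal Z-vector = Z * 1, modulation by G wipes the signal out
Z = 3.0
eta = rng.normal(size=n)                  # noise at a fixed time t
xi = Z * np.ones(n) + eta                 # observation xi_t = Z + eta_t
assert np.allclose(G @ xi, G @ eta)       # modulated observation is pure noise
```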
We therefore see that random graphs in the context of irrevocably modulated filtrations can lead to pure noise scenarios. We believe that studying irrevocably modulated filtrations through the eigenvalues of \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\)—e.g., via the spectral gap of \(\{G(v,e_t)\}_{t\in \mathbb {T}}\)—may provide further insight into quantifying information shrinkage, the details of which we leave for future work.
3.4 From Spectral Radius to Information Decay
Using irrevocably modulated filtrations, one can also construct systems that exhibit gradual decay of the impact of additional information sources as time progresses. As an example, allow \(\mathbb {T}_{-} = \bigcup _{d=1}^m \mathbb {T}^d\) for some \(m\in \mathbb {N}_+\), and choose \(\{\varvec{P}^{(i)}_t\}_{t\in \mathbb {T}^1}=\varvec{p}^{(1,i)}\) for \(i=1,\ldots ,n\) over the period \(\mathbb {T}^1\), such that \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}^1}\) is irreducible with each entry non-negative. By the Perron–Frobenius theorem, \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}^1}\) has a positive eigenvalue \(\lambda ^*\) that is its spectral radius. We now require \(\lambda ^* < 1\), and for any \(\mathbb {T}^d\) for \(d=2,\ldots ,m\), let \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}^d}\) be such that \(\varvec{\hat{P}}_t\) over \(t\in \mathbb {T}^d\) is given by \(\{(\varvec{\hat{P}}_t)^d\}_{t\in \mathbb {T}^1}\). Thus, with spectral radius \(\lambda ^* < 1\), the decay of \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) is controlled by \((\lambda ^*)^d\) as time progresses through \(\mathbb {T}^d\) from \(d=1\) to \(d=m\). Accordingly, scenarios arise where the coordinates of \(\{\bar{\varvec{\xi }}^d_{t}\}_{t\in \mathbb {T}^d}\), as in Proposition 3.13, converge toward each other through \(\mathbb {T}^d\) from \(d=1\) to \(d=m\), making each information source more and more indistinguishable as time evolves.
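The decay mechanism can be illustrated numerically (a Python sketch; the positive matrix and the target spectral radius \(\lambda ^*=0.8\) are arbitrary choices, not part of the framework):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

M = rng.uniform(0.1, 1.0, size=(n, n))      # strictly positive, hence irreducible
lam = np.abs(np.linalg.eigvals(M)).max()    # Perron root = spectral radius of M
P = 0.8 * M / lam                           # rescale so the spectral radius is 0.8 < 1

# operator norms of the powers P^d decay geometrically, at asymptotic rate 0.8
norms = [np.linalg.norm(np.linalg.matrix_power(P, d), 2) for d in range(1, 13)]
assert norms[-1] < 0.2 * norms[0]           # powers shrink substantially
assert abs(norms[-1] / norms[-2] - 0.8) < 0.05  # ratio approaches lambda* = 0.8
```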
4 Conclusion
We develop a probabilistic information framework that embeds filtrations generated by non-invertible matrix-valued modulators acting on noisy observation processes. We produce dynamical representations of conditional expectations in systems where signals from randomly changing information networks may switch off or get irreversibly amalgamated over random time horizons. Since the main focus of this work has been the modulation aspect, we chose to maintain a signal-plus-noise structure for the observation processes without digressing into what other forms information may take. GRBs provide valuable modeling flexibility and analytic tractability, but the framework can be applied to other processes, e.g., the Lévy random bridges of [23], the Lévy information processes of [25], the randomized Markov bridges of [32] and many more—however, these classes are not necessarily closed under affine transformations and additional care may be required.
Data Availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
References
Wiener, N.: Cybernetics: Or the Control and Communication in the Animal and the Machine. Wiley, New York (1948)
Wiener, N.: The Extrapolation, Interpolation, and Smoothing of Stationary Time Series. Wiley, New York (1949)
Bellman, R.E.: Adaptive Control Processes. Princeton University Press, Princeton (1961)
Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Wiley, New York (1962)
Csiszár, I.: Information-type measures of difference of probability distributions and indirect observations. Stud. Sci. Math. Hungar. 2, 299–318 (1967)
Davis, M.H.A.: Linear Estimation and Stochastic Control. Chapman & Hall, London (1977)
Kailath, T.: A view of three decades of linear filtering theory. IEEE Trans. Inf. Theory 20, 146–181 (1974)
Kalman, R.E.: Randomness reexamined. Model. Identif. Control 15, 141–151 (1994)
Liptser, R.S., Shiryaev, A.N.: Statistics of Random Processes, Vols. I and II, 2nd edn. Springer, Berlin (2000)
Adler, S.L., Brody, D.C., Brun, T.A., Hughston, L.P.: Martingale models for quantum state reduction. J. Phys. A Math. General 34, 8795 (2001)
Brody, D.C., Hughston, L.P.: Finite-time stochastic reduction models. J. Math. Phys. 46, 082101 (2005)
Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4217 (2005)
Brody, D.C., Hughston, L.P.: Quantum noise and stochastic reduction. J. Phys. A Math. General 39, 833 (2006)
Fleming, W.H., Soner, H.M.: Controlled Markov Processes and Viscosity Solutions. Springer-Verlag, New York (2006)
Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
Candès, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006)
Gasbarra, D., Sottinen, T., Valkeila, E.: Gaussian bridges. In: Stochastic Analysis and Applications, Abel Symp. Springer, Berlin (2007)
Brody, D.C., Hughston, L.P., Macrina, A.: Information-based asset pricing. Int. J. Theor. Appl. Finance 11, 107–142 (2008)
Brody, D.C., Hughston, L.P., Macrina, A.: Dam rain and cumulative gain. Proc. R. Soc. A 464, 1801–1822 (2008)
Amari, S., Nagaoka, H.: Methods of Information Geometry. American Mathematical Society, Providence (2008)
Brody, D.C., Davis, M.H.A., Friedman, R.L., Hughston, L.P.: Informed traders. Proc. R. Soc. A 465, 1103–1122 (2009)
Bain, A., Crisan, D.: Fundamentals of Stochastic Filtering. Springer, New York (2010)
Hoyle, E., Hughston, L.P., Macrina, A.: Lévy random bridges and the modelling of financial information. Stoch. Process. Their Appl. 121(4), 856–884 (2011)
Hoyle, E., Mengütürk, L.A.: Archimedean survival processes. J. Multivar. Anal. 115, 1–15 (2013)
Brody, D.C., Hughston, L.P., Yang, X.: Signal processing with Lévy information. Proc. R. Soc. A 469 (2013)
Candès, E.J., Strohmer, T., Voroninski, V.: PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math. 66(8), 1241–1274 (2013)
Candès, E.J., Fernandez-Granda, C.: Towards a mathematical theory of super-resolution. Commun. Pure Appl. Math. 67(6), 906–956 (2014)
Sottinen, T., Yazigi, A.: Generalized Gaussian bridges. Stoch. Process. Their Appl. 124(9), 3084–3105 (2014)
Mengütürk, L.A.: Stochastic Schrödinger evolution over piecewise enlarged filtrations. J. Math. Phys. 57, 032106 (2016)
Mengütürk, L.A.: Gaussian random bridges and a geometric model for information equilibrium. Phys. A Stat. Mech. Appl. 494, 465–483 (2018)
Brody, D.C.: Modelling election dynamics and the impact of disinformation. Inf. Geom. 2, 209–230 (2019)
Macrina, A., Sekine, J.: Stochastic modelling with randomized Markov bridges. Stoch. Int. J. Probab. Stoch. Process. 93, 29–55 (2019)
Øksendal, B.K., Sulem, A.: Applied Stochastic Control of Jump Diffusions. Springer, Berlin (2019)
Hoyle, E., Macrina, A., Mengütürk, L.A.: Modulated information flows in financial markets. Int. J. Theor. Appl. Finance 23, 1–35 (2020)
Hoyle, E., Mengütürk, L.A.: Generalised Liouville processes and their properties. J. Appl. Probab. 57, 1088–1110 (2020)
Mengütürk, L.A., Mengütürk, M.C.: Stochastic sequential reduction of commutative Hamiltonians. J. Math. Phys. 61, 102104 (2020)
Acknowledgements
The author is grateful to the anonymous referee for their very helpful comments and valuable suggestions that led to a significant improvement in the paper. The author would also like to thank Ed Hoyle for the stimulating discussions on the paper.
Appendix
1.1 Proof of Lemma 2.5
Proof
The direction that \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) being Markov with respect to \(\{\mathcal {F}_t^{\varvec{X}}\}\) implies \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) is Markov with respect to \(\{\mathcal {F}_t^{\varvec{\xi }}\}\) is proved in [30, Proposition 2.3]. For the opposite direction, we first define
for \(0\le t_1<\ldots<t_n <t\le T\). Then, if \(\{\varvec{\xi }_t\}_{t\in \mathbb {T}}\) is Markov with respect to \(\{\mathcal {F}_t^{\varvec{\xi }}\}\), we have
We also have the following:
Since (61) and (62) are equal, it follows that \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) is Markov with respect to \(\{\mathcal {F}_t^{\varvec{X}}\}\).
\(\square \)
1.2 Proof of Lemma 2.7
Proof
Since we work over \({\mathcal {D}}(\mathbb {T},\mathbb {R}^n)\), it suffices to show
for all \(k_i \in \mathbb {N}_+\), all \(0 \le t^{(i)}_{1}< \cdots< t^{(i)}_{k_i} < T\), and all \((x^{(i)}_{1}, \ldots ,x^{(i)}_{k_i})\in {\mathbb {R}}^{k_i}\) for \(i=1,\ldots ,n\).
Using Definition 2.1, we can write
Denoting \({\mathcal {I}}=\{1,\ldots ,n\}\) and defining the following map:
and using the time-changed Markov property in (6), we have
Finally, from Definition 2.1 and (66), we get
which in turn yields (7) for any \(\varvec{B}\in {\mathcal {B}}({\mathbb {R}}^n)\). \(\square \)
1.3 Proof of Lemma 2.13
Proof
First, we define \(k^{\delta }_{t,T} = \partial K^{\delta }_{t,T} / \partial t\) and \(k^{*}_{t,T} = \partial K^{*}_{t,T} / \partial t\), where \(K^{*}_{t,T}= K_{t,T} / K_{T,T}\). From Proposition 2.2 and Lemma 2.5, we get \(\pi _t(\mathrm {d}z) = H_t(\xi _t)^{-1}h_t(\xi _t; \mathrm {d}z)\), where we defined the maps \(h:\mathbb {R}_+ \times \mathbb {R}\rightarrow \mathbb {R}_+\) and \(H:\mathbb {R}_+ \times \mathbb {R}\rightarrow \mathbb {R}_+\) as follows:
Hence, \(h\in {\mathcal {C}}^{1,2}(\mathbb {R}_+,\mathbb {R})\) and \(H\in {\mathcal {C}}^{1,2}(\mathbb {R}_+,\mathbb {R})\). Since \(\{K^{*}_{t,T}\left( Z - X_T \right) \}\) is a continuous finite variation process, using Proposition 2.12, \(\{\left\langle \xi _{t}, \xi _{t} \right\rangle \}_{t\in \mathbb {T}} = \{\left\langle X_{t}, X_{t} \right\rangle \}_{t\in \mathbb {T}}\) is the quadratic variation for the \((\mathcal {F}_t^{\xi },\mathbb {P})\)-semimartingale \(\{\xi _t\}_{t\in \mathbb {T}}\). Also, using the Itô product rule, we get \(\pi _t^{-1}\mathrm {d}\pi _t = h_t^{-1}\mathrm {d}h_t - H_t^{-1}\mathrm {d}H_t + H_t^{-2}\mathrm {d}\left\langle H_t, H_t \right\rangle - (h_t H_t)^{-1}\mathrm {d}\left\langle h_t, H_t \right\rangle \). Thus, we have the following:
by using Fubini’s theorem. For the quadratic terms, we have
We gather the terms above as follows:
and write \(\pi _t^{-1}\mathrm {d}\pi _t\) compactly as
where \(\sigma _t(\xi ,z)\ne 0\) \(\mathbb {P}\)-a.s. since \(\{\xi _t\}_{t\in \mathbb {T}}\) is continuous. Using Proposition 2.12, \(\{S_t(z)\}_{t\in \mathbb {T}_{-}}\) is an \((\mathcal {F}_t^{\xi },\mathbb {P})\)-semimartingale with \(\{\left\langle S_t(z), S_t(z)\right\rangle \}_{t\in \mathbb {T}_{-}}=\{Q_t\}_{t\in \mathbb {T}_{-}}\) for any \(z\in \mathbb {R}\). Since \(\{S_t(z)\}_{t\in \mathbb {T}_{-}}\) is continuous, (14) follows as a Doléans–Dade exponential for the \((\mathcal {F}_t^{\xi },\mathbb {P})\)-martingale \(\{\pi _t\}_{t\in \mathbb {T}}\). \(\square \)
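To make the Bayes-ratio structure \(\pi _t(\mathrm {d}z) = H_t(\xi _t)^{-1}h_t(\xi _t; \mathrm {d}z)\) concrete, here is a hedged numerical sketch under a deliberately simplified, assumed model (a single observation time, a two-point signal and Brownian-bridge noise; this is not the paper's exact specification). By the tower property, the mean of \(\pi _t(\{1\})\) over scenarios equals the prior mass, consistent with \(\{\pi _t\}\) being an \((\mathcal {F}_t^{\xi },\mathbb {P})\)-martingale.

```python
import numpy as np

# Assumed toy model: xi_t = sigma * t * Z + beta_{tT}, where beta is a
# standard Brownian bridge on [0, T] pinned to 0 at T.  At a fixed time t,
# the marginal of beta_{tT} is N(0, t*(T-t)/T), so the conditional measure
# pi_t(dz) = H_t(xi_t)^{-1} h_t(xi_t; dz) reduces to a Bayes update.
rng = np.random.default_rng(0)
sigma, T, t, p = 1.0, 1.0, 0.4, 0.5
var = t * (T - t) / T                      # bridge variance at time t

N = 200_000
Z = rng.binomial(1, p, size=N)             # two-point signal Z in {0, 1}
xi = sigma * t * Z + rng.normal(0.0, np.sqrt(var), size=N)

def lik(x, z):
    # Gaussian likelihood of the observation xi_t given Z = z.
    return np.exp(-(x - sigma * t * z) ** 2 / (2 * var))

h1, h0 = p * lik(xi, 1), (1 - p) * lik(xi, 0)
pi_1 = h1 / (h1 + h0)                      # pi_t({1}) = H^{-1} h

print(pi_1.mean())                         # close to the prior mass p = 0.5
```

The sample average of \(\pi _t(\{1\})\) matches the prior probability up to Monte Carlo error, which is the discrete counterpart of the martingale property established in Lemma 2.13.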
1.4 Proof of Lemma 3.6
Proof
Let \(\varvec{\eta }_t=\varvec{\hat{P}}_t \,\varvec{\xi }_t\) for some \(t\in \mathbb {T}\). Using Proposition 2.2, and the independence of \(\{X_t^{(i)}\}_{t\in \mathbb {T}}\) for \(i=1,\ldots ,n\), we can write
where \(\bar{X}_{t}^{(i)}\) is equal in law to \(X_{t}^{(i)}\), but not necessarily mutually independent across \(i=1,\ldots ,n\). Then, \(\varvec{\eta }_t \overset{\text {law}}{=}\varvec{P}_t\otimes \varvec{\bar{\xi }}_t\) given \(\varvec{\hat{P}}_t\), and defining \(\bar{Z}^{(i)}_t= ||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}^{-1}(\varvec{P}^{(i)}_t)^{\top }\varvec{Z}\) whenever \(||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}\ne 0\), the modified information \(\varvec{\bar{\xi }}_t\in \mathbb {R}^{n}\) is as given in (18), which holds since \(||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}=0 \Rightarrow \varvec{P}^{(i)}_t=\varvec{0}\) for any \(i=1,\ldots ,n\). Thus, if we also write \(\bar{\xi }^{(i)}(u)\) as the value of \(\{\bar{\xi }^{(i)}_u\}_{u\le t}\) at \(u\in \mathbb {T}\), since \(\mathbb {P}(\varvec{P}^{(i)}_u = \varvec{0}, \varvec{\xi }_u= \varvec{0})=0\) for all \(u\in \mathbb {T}\), we can write
since \(\{\tau ^{(i)}_u\}\) is progressively measurable with respect to \(\{\mathcal {F}_u^{\varvec{\hat{P}},\varvec{\xi }}\}_{u\in \mathbb {T}}\). Given that \(\{\varvec{X}_t\}_{t\in \mathbb {T}}\) satisfies the time-changed Markov invariance, \(\varvec{\bar{X}}_t\) satisfies (6) at each \(\varvec{p}^{(i)}\), and thus, using Lemma 2.7,
for all \(k_i \in \mathbb {N}_+\), any \(k_* \ge \max (k_1,\ldots ,k_n)\), any \(0 \le t^{(i)}_{1}< \cdots< t^{(i)}_{k_i} \le t < T\) where \(||\varvec{P}^{(i)}_{t^{(i)}_{j}}||_{{\mathcal {L}}^2}\ne 0\), and all \((x^{(i)}_{1}, \ldots ,x^{(i)}_{k_i})\in {\mathbb {R}}^{k_i}\), given that \(\bar{z}^{(i)} = ||\varvec{p}^{(i)}||_{{\mathcal {L}}^2}^{-1}(\varvec{p}^{(i)})^{\top }\varvec{z}\) for \(i=1,\ldots ,n\). Also, since
we can enlarge \(\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\) by defining \({\mathcal {G}}_t^{\varvec{\hat{P}},\varvec{\xi }}=\sigma \left( \{\bar{\xi }^{(i)}_u\}_{0\le u\le \tau ^{(i)}_t} : i=1,\ldots , n\right) \bigvee \mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }}\). Then, using the tower property and (79), we get
and (17) follows from (80), the independence of \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\), and since \(Z^{\varvec{\hat{P}}}_t=\int _{\mathbb {R}^n}g(\varvec{z})\pi ^{\varvec{\hat{P}}}_t(\mathrm {d}\varvec{z})\). \(\square \)
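The row-normalised signals \(\bar{Z}^{(i)}= ||\varvec{P}^{(i)}_t||_{{\mathcal {L}}^2}^{-1}(\varvec{P}^{(i)}_t)^{\top }\varvec{Z}\) appearing in the proof can be sketched numerically; the modulator entries below are hypothetical, chosen only to exhibit one amalgamation (two equal rows) and one switch-off (a zero row).

```python
import numpy as np

def modulated_signals(P, Z):
    # For each row P^(i) of the (possibly non-invertible) modulator,
    # return Zbar^(i) = ||P^(i)||^{-1} (P^(i))^T Z when ||P^(i)|| != 0;
    # a zero row corresponds to a switched-off source (marked nan here).
    out = np.full(P.shape[0], np.nan)
    for i, row in enumerate(P):
        nrm = np.linalg.norm(row)          # L2 norm ||P^(i)||
        if nrm != 0.0:
            out[i] = row @ Z / nrm
    return out

Z = np.array([2.0, -1.0, 3.0])             # illustrative signal vector
P = np.array([[1.0, 1.0, 0.0],             # rows 0 and 1 equal: amalgamation
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])            # row 2 zero: source switched off

Zbar = modulated_signals(P, Z)
print(Zbar)
```

Equal rows yield identical modified signals, so the two sources become irreversibly amalgamated, while the zero row carries no signal at all; these are precisely the two degeneracies the irrevocably modulated filtration records.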
1.5 Proof of Lemma 3.16
Proof
Since \(\{\varvec{\hat{P}}_t\}_{t\in \mathbb {T}}\) has finite activity, it has a finite number of jumps, and hence, \(\{\pi ^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}}\) can be decomposed into a sum of its continuous and discontinuous components through \(\pi ^{\varvec{\hat{P}}}_t(\mathrm {d}\varvec{z}) = \tilde{\pi }^{\varvec{\hat{P}}}_t(\mathrm {d}\varvec{z}) + \sum _{u\le t} \Delta \pi ^{\varvec{\hat{P}}}_u(\mathrm {d}\varvec{z})\), where \(\{\tilde{\pi }^{\varvec{\hat{P}}}_t\}_{t\in \mathbb {T}_{-}}\) is the continuous part. Using (26), we define \(h\in {\mathcal {C}}^{1,2}(\mathbb {R}_+,\mathbb {R}^n)\) and \(H\in {\mathcal {C}}^{1,2}(\mathbb {R}_+,\mathbb {R}^n)\) as follows:
For any \(t\in [\Lambda _{k-1},\Lambda _k)\) and any k, \(\{{\mathcal {J}}_t\}_{t\in [\Lambda _{k-1},\Lambda _k)}\) and \(\{{\mathcal {J}}^C_t\}_{t\in [\Lambda _{k-1},\Lambda _k)}\) are constant processes. In addition, \(\{\phi ^{C}_t(\varvec{z})\}_{t\in [\Lambda _{k-1},\Lambda _k)}\) is constant since \(\tau ^{(i)}_t<t\) for \(i\in {\mathcal {J}}^{C}_t\). Keeping \(t\in [\Lambda _{k-1},\Lambda _k)\), defining \(\sigma _t(\bar{\varvec{\xi }},\bar{z}^{(i)})\) as stated in the proposition, and following similar steps as in Lemma 2.13, where we have \((\tilde{\pi }^{\varvec{\hat{P}}}_t)^{-1}\mathrm {d}\tilde{\pi }^{\varvec{\hat{P}}}_t = h_t^{-1}\mathrm {d}h_t - H_t^{-1}\mathrm {d}H_t + H_t^{-2}\mathrm {d}\left\langle H_t, H_t \right\rangle - (h_t H_t)^{-1}\mathrm {d}\left\langle h_t, H_t \right\rangle \), we gather all the derivative terms, without presenting them explicitly, as follows:
for \({\mathcal {J}}_{\Lambda _{k-1}}\ne \emptyset \), where \(\sigma _t(\bar{\varvec{\xi }},\bar{z}^{(i)})\ne 0\) \(\mathbb {P}\)-a.s. since \(\{\bar{\varvec{\xi }}_{\varvec{\tau }_t}\}_{\Lambda _{k-1}\le t < \Lambda _k}\) is continuous with at least one element of \(\varvec{\tau }_t\) being \(t\). Since linear combinations of \(\{X_{t}^{(i)}\}_{t\in \mathbb {T}}\) are semimartingales, using Proposition 2.12 and (18), each \(\{S_t^{(i,k-1)}(\bar{z}^{(i)})\}_{\Lambda _{k-1}\le t < \Lambda _k}\) is an \((\mathcal {F}_t^{\varvec{\hat{P}},\varvec{\xi }},\mathbb {P})\)-semimartingale for any \(\varvec{\bar{z}}\in \mathbb {R}^n\) with quadratic variation \(\{\langle \bar{X}_{t}^{(i)}, \bar{X}_{t}^{(i)}\rangle \}_{\Lambda _{k-1}\le t< \Lambda _k}=\{Q_t^{(i)}\}_{\Lambda _{k-1}\le t < \Lambda _k}\). For the discontinuous part of \(\{\pi ^{\varvec{\hat{P}}}_u\}_{u\le t}\), using Proposition 3.9 and the continuity of \(\{X^{(i)}_t\}_{t\in \mathbb {T}}\), we have
since \(\Delta \pi ^{\varvec{\hat{P}}}_u(\mathrm {d}\varvec{z})\ne 0\) if and only if at least one coordinate of \(\varvec{P}_u\) in (19) is such that \(||\varvec{P}^{(i)}_{u-}||_{{\mathcal {L}}^2}=0\) at time \(u-\) and \(||\varvec{P}^{(i)}_u||_{{\mathcal {L}}^2}=1\) at time u. Then, \(\tilde{\pi }^{\varvec{\hat{P}}}_t = \int _{0}^t\mathrm {d}\tilde{\pi }^{\varvec{\hat{P}}}_u\) plus (82) gives the result. \(\square \)
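The finite-activity decomposition \(\pi ^{\varvec{\hat{P}}}_t = \tilde{\pi }^{\varvec{\hat{P}}}_t + \sum _{u\le t} \Delta \pi ^{\varvec{\hat{P}}}_u\) used above can be illustrated on a toy càdlàg path; the path, jump times and jump sizes below are hypothetical, chosen only to show that with finitely many jumps, subtracting the running jump sum recovers the continuous component.

```python
import numpy as np

# Toy cadlag path: smooth component plus two jumps (at t = 0.3 and t = 0.7).
t = np.linspace(0.0, 1.0, 1001)
cont = np.sin(2 * np.pi * t)               # continuous component (pi_tilde)
jumps = np.where(t >= 0.3, 0.5, 0.0) + np.where(t >= 0.7, -0.2, 0.0)
path = cont + jumps

# Isolate the jump increments (large steps) and accumulate them;
# on this grid the continuous increments are far below the threshold.
dpi = np.diff(path, prepend=path[0])
jump_part = np.where(np.abs(dpi) > 0.1, dpi, 0.0).cumsum()
recovered = path - jump_part               # numerical pi_tilde

err = np.max(np.abs(recovered - cont))
print(err)                                 # small grid-discretisation error
```

This mirrors the proof's strategy: handle \(\{\tilde{\pi }^{\varvec{\hat{P}}}_t\}\) with continuous-semimartingale calculus between jump times, then add the finitely many jump contributions \(\Delta \pi ^{\varvec{\hat{P}}}_u\) separately.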
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Mengütürk, L.A. From Irrevocably Modulated Filtrations to Dynamical Equations Over Random Networks. J Theor Probab 36, 845–875 (2023). https://doi.org/10.1007/s10959-022-01201-0