On the observational equivalence of continuous-time deterministic and indeterministic descriptions

Original paper in Philosophy of Physics · European Journal for Philosophy of Science

Abstract

This paper presents and philosophically assesses three types of results on the observational equivalence of continuous-time measure-theoretic deterministic and indeterministic descriptions. The first results establish observational equivalence to abstract mathematical descriptions. The second results are stronger because they show observational equivalence between deterministic and indeterministic descriptions found in science. Here I also discuss Kolmogorov’s contribution. For the third results I introduce two new meanings of ‘observational equivalence at every observation level’. Then I show the even stronger result of observational equivalence at every (and not just some) observation level between deterministic and indeterministic descriptions found in science. These results imply the following. Suppose one wants to find out whether a phenomenon is best modeled as deterministic or indeterministic. Then one cannot appeal to differences in the probability distributions of deterministic and indeterministic descriptions found in science to argue that one of the descriptions is preferable because there is no such difference. Finally, I criticise the extant claims of philosophers and mathematicians on observational equivalence.

Notes

  1. For more details on measure theory, see Cornfeld et al. (1982), Doob (1953) and Petersen (1983).

  2. The question of how to interpret this probability is not one of the main topics here. I just mention a popular interpretation. According to the time-average interpretation, the probability of a set A is the long-run proportion of time a solution spends in A (cf. Eckmann and Ruelle 1985; Werndl 2009b).

  3. That is, except for Q with μ(Q) = 0.

  4. If (M, Σ M , μ, T t ) is measure-preserving, {Φ(T t ); t ∈ ℝ} is stationary (the proof in Werndl 2009a, Section 3.3, carries over to continuous time).

  5. For instance, a stronger result is that for Kolmogorov systems and any finite-valued observation function the following holds: even if you know the entire infinite history of the stochastic description, you will not be able to predict with certainty which outcome follows next (Uffink 2007, 1012–1014).

  6. The σ-algebra generated by E is the smallest σ-algebra containing E; that is, the σ-algebra \(\bigcap_{\sigma\textnormal{-algebras}\,\Sigma,\,E\subseteq \Sigma}\Sigma\).

  7. The deterministic representation of any stationary description is measure-preserving (the proof in Werndl 2009a, Section 3.3, carries over to continuous time).

  8. By ‘probability distributions’ I mean here nontrivial probability distributions (not all probabilities 0 or 1). Otherwise such a division could never exist. For consider a description composed of two semi-Markov descriptions (cf. Example 2), and an observation function which tells one whether the outcome belongs to the first or the second description. Then the probability distribution obtained by applying the observation function is trivial.

  9. A discrete Bernoulli system is defined as follows. First of all, one needs to understand the definition of a discrete measure-preserving deterministic description (see Definition 12 in Section 8). A doubly-infinite sequence of independent rolls of an N-sided die where the probability of obtaining s k is p k , 1 ≤ k ≤ N, \(\sum_{k=1}^{N}p_{k}=1\), is a stochastic Bernoulli description. Let M be the set of all bi-infinite sequences m = (...m( − 1), m(0), m(1)...) with \(m(i)\in\bar{M}=\{s_{1},\ldots,s_{N}\}\). Let Σ M be the σ-algebra generated by the cylinder-sets \(C^{\{s_{l_{1}}\}...\{s_{l_{n}}\}}_{i_{1}\ldots i_{n}}\) as defined in (1) with i j  ∈ ℤ. These sets have probability \(\bar{\mu}(C^{\{s_{l_{1}}\}...\{s_{l_{n}}\}}_{i_{1}\ldots i_{n}})=p_{s_{l_{1}}}...p_{s_{l_{n}}}\). Let μ be the unique measure determined by \(\bar{\mu}\) on Σ M . Let T : M → M, T((...m(i)...)) = (...m(i + 1)...). (M, Σ M , μ, T) is a discrete description called a Bernoulli shift. Finally, a discrete measure-preserving deterministic description is a Bernoulli system iff it is isomorphic to some Bernoulli shift (isomorphism is defined exactly as for continuous time; see Definition 5). A toy simulation of a Bernoulli shift is sketched after these notes.

  10. It does not matter how Φ0(ϕ(m)) is defined for \(m\in M\setminus\hat{M}\).

  11. n-step semi-Markov descriptions are continuous Bernoulli systems too (Ornstein 1974). Therefore, an analogous argument shows that Bernoulli systems in science are observationally equivalent to n-step semi-Markov descriptions.

  12. My arguments allow any meaning of ‘deterministic descriptions deriving from scientific theories’ that excludes the deterministic representation but is wide enough to include some continuous Bernoulli systems.

  13. Also, compared to the results in Section 3, the results here concern a narrower class of descriptions, namely only continuous Bernoulli systems, semi-Markov descriptions and n-step semi-Markov descriptions.

  14. (M, d M ) is separable iff there is a countable set \(\tilde{M}=\{m_{n}\,|n\in\mathbb{N}\}\) with m n  ∈ M such that every nonempty open subset of M contains at least one element of \(\tilde{M}\).

  15. The results here concern the same kind of descriptions as in Section 4, namely continuous Bernoulli systems, semi-Markov descriptions and n-step semi-Markov descriptions.

  16. The question of which description is preferable is an interesting one but requires another paper. Werndl (2011) is devoted to this question.

  17. It seems worth noting that for the semi-Markov descriptions of Theorem 4 any arbitrary outcome will be followed by at least two different outcomes. And for many Bernoulli systems any of these semi-Markov descriptions is such that none of the transition probabilities is close to 0 or 1 (see Subsection A.3).

  18. The text in braces is in a footnote.

  19. Ornstein and Weiss are mathematicians and not philosophers. So one should not blame them for misguided philosophical claims. Still, one needs to know whether their philosophical claims are tenable; thus I criticise them.

  20. If (M, Σ M , μ, T t ) and a semi-Markov description {Z t ; t ∈ ℝ} are ε-congruent, there is a finite-valued Φ such that {Φ(T t ); t ∈ ℝ} is {Z t ; t ∈ ℝ} (cf. Subsection 5.1).

  21. Deterministic descriptions in science have finite KS-entropy (Ornstein and Weiss 1991, 19).

  22. Suppes (1999, 182) and Winnie (1998, 317) similarly claim that the philosophical significance of the result that some discrete-time descriptions are ε-congruent for all ε > 0 to discrete-time Markov descriptions is that there is a choice between deterministic and stochastic descriptions. Werndl (2009a, Section 4.2.2) has already criticised these claims as too weak but did not explain why; this is done here.

  23. The term ‘semi-Markov description’ is not defined unambiguously. I follow Ornstein and Weiss (1991).

  24. Again, the term ‘n-step semi-Markov description’ is not used unambiguously; I follow Ornstein and Weiss (1991).

  25. That is, χ A (m) = 1 for m ∈ A and 0 otherwise.
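
As mentioned in note 9, here is a minimal Python sketch of a Bernoulli shift (my own illustration, not part of the paper; the function name and parameters are hypothetical, and a finite window of symbols stands in for a bi-infinite sequence): independent rolls of an N-sided die generate the sequence, and T shifts every coordinate one place to the left.

```python
import random

def bernoulli_shift_sample(probabilities, window=10, steps=3, seed=0):
    """Sample a finite stretch of a Bernoulli sequence and apply the shift T a few times.

    probabilities gives p_1, ..., p_N for the outcomes of an N-sided die; T moves
    every coordinate one step to the left: (T m)(i) = m(i + 1).  A window of
    2*window + 1 symbols stands in for the bi-infinite sequence m.
    """
    rng = random.Random(seed)
    outcomes = list(range(1, len(probabilities) + 1))   # code s_1, ..., s_N as 1, ..., N
    m = [rng.choices(outcomes, weights=probabilities)[0] for _ in range(2 * window + 1)]
    trajectory = [m]
    for _ in range(steps):
        m = m[1:] + [rng.choices(outcomes, weights=probabilities)[0]]  # shift left, refill the right edge
        trajectory.append(m)
    return trajectory

# Fair two-sided die (p_1 = p_2 = 1/2); each printed row is the previous one shifted by T.
for row in bernoulli_shift_sample([0.5, 0.5]):
    print(row)
```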

References

  • Berkovitz, J., Frigg, R., & Kronz, F. (2006). The Ergodic Hierarchy, Randomness and Hamiltonian Chaos. Studies in History and Philosophy of Modern Physics, 37, 661–691.

  • Butterfield, J. (2005). Determinism and indeterminism. Routledge Encyclopaedia of Philosophy Online.

  • Cornfeld, I. P., Fomin, S. V., & Sinai, Y. G. (1982). Ergodic theory. Berlin: Springer.

  • Doob, J. L. (1953). Stochastic processes. New York: John Wiley & Sons.

  • Eagle, A. (2005). Randomness is unpredictability. The British Journal for the Philosophy of Science, 56, 749–790.

  • Eckmann, J.-P., & Ruelle, D. (1985). Ergodic theory of chaos and strange attractors. Reviews of Modern Physics, 57, 617–654.

  • Feldman, J., & Smorodinsky, M. (1971). Bernoulli flows with infinite entropy. The Annals of Mathematical Statistics, 42, 381–382.

  • Halmos, P. (1944). In general a measure-preserving transformation is mixing. The Annals of Mathematics, 45, 786–792.

  • Halmos, P. (1949). Measurable transformations. Bulletin of the American Mathematical Society, 55, 1015–1043.

  • Hopf, E. (1932). Proof of Gibbs’ hypothesis on the tendency toward statistical equilibrium. Proceedings of the National Academy of Sciences of the United States of America, 18, 333–340.

  • Janssen, J., & Limnios, N. (1999). Semi-Markov models and applications. Dordrecht: Kluwer Academic Publishers.

  • Krieger, W. (1970). On entropy and generators of measure-preserving transformations. Transactions of the American Mathematical Society, 149, 453–456.

  • Lorenz, E. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20, 130–141.

  • Luzzatto, S., Melbourne, I., & Paccaut, F. (2005). The Lorenz attractor is mixing. Communications in Mathematical Physics, 260, 393–401.

  • Ornstein, D. (1970). Imbedding Bernoulli shifts in flows. In A. Dold, & B. Eckmann (Eds.), Contributions to Ergodic theory and probability, proceedings of the first Midwestern conference on Ergodic theory (pp. 178–218). Berlin: Springer.

  • Ornstein, D. (1974). Ergodic theory, randomness, and dynamical systems. New Haven and London: Yale University Press.

  • Ornstein, D., & Gallavotti, G. (1974). Billiards and Bernoulli schemes. Communications in Mathematical Physics, 38, 83–101.

  • Ornstein, D., & Weiss, B. (1991). Statistical properties of chaotic systems. Bulletin of the American Mathematical Society, 24, 11–116.

  • Park, K. (1982). A special family of Ergodic flows and their \(\bar{d}\)-limits. Israel Journal of Mathematics, 42, 343–353.

  • Petersen, K. (1983). Ergodic theory. Cambridge: Cambridge University Press.

  • Radunskaya, A. (1992). Statistical properties of deterministic Bernoulli flows. Ph.D. Dissertation, Stanford: Stanford University.

  • Simányi, N. (2003). Proof of the Boltzmann-Sinai Ergodic hypothesis for typical hard disk systems. Inventiones Mathematicae, 154, 123–178.

  • Sinai, Y. G. (1989). Kolmogorov’s work on Ergodic theory. The Annals of Probability, 17, 833–839.

  • Strogatz, S. H. (1994). Nonlinear dynamics and chaos, with applications to physics, biology, chemistry, and engineering. New York: Addison Wesley.

  • Suppes, P. (1999). The noninvariance of deterministic causal models. Synthese, 121, 181–198.

  • Suppes, P., & de Barros, A. (1996). Photons, billiards and chaos. In P. Weingartner, & G. Schurz (Eds.), Law and prediction in the light of chaos research (pp. 189–207). Berlin: Springer.

  • Uffink, J. (2007). Compendium to the foundations of classical statistical physics. In J. Butterfield, & J. Earman (Eds.), Philosophy of physics (handbooks of the philosophy of science B) (pp. 923–1074). Amsterdam: North-Holland.

  • Werndl, C. (2009a). Are deterministic descriptions and indeterministic descriptions observationally equivalent? Studies in History and Philosophy of Modern Physics, 40, 232–242.

  • Werndl, C. (2009b). What are the new implications of chaos for unpredictability? The British Journal for the Philosophy of Science, 60, 195–220.

  • Werndl, C. (2011). On choosing between deterministic and indeterministic descriptions: underdetermination and indirect evidence (unpublished manuscript).

  • Winnie, J. (1998). Deterministic chaos and the nature of chance. In J. Earman, & J. Norton (Eds.), The cosmos of science—essays of exploration (pp. 299–324). Pittsburgh: Pittsburgh University Press.

Acknowledgements

I am indebted to Jeremy Butterfield and Jos Uffink for valuable suggestions and stimulating discussion on previous versions of this manuscript. For useful comments I also want to thank Franz Huber, James Ladyman, Miklós Rédei, and the audiences at the Philosophy of Physics Seminar, Cambridge University, the Philosophy of Physics Research Seminar, Oxford University, and the Philosophy and History of Science Seminar, Bristol University. Many thanks also to two anonymous referees and the editor Carl Hoefer for valuable suggestions. I am grateful to the Queen’s College, Oxford University, for supporting me with a junior research fellowship.

Appendix

A.1 Semi-Markov and n-step semi-Markov descriptions

First, we need the following definition.

Definition 9

A discrete stochastic description {Z t ; t ∈ ℤ} is a family of random variables Z t , t ∈ ℤ, from a probability space (Ω, ΣΩ, ν) to a measurable space \((\bar{M},\Sigma_{\bar{M}})\).

Semi-Markov and n-step semi-Markov descriptions are defined via discrete-time Markov and n-step Markov descriptions (for a definition of these well-known descriptions, see Doob (1953), where it is also explained what it means for these descriptions to be irreducible and aperiodic).

A semi-Markov description (Footnote 23) is defined with the help of a discrete stochastic description {(S k , T k ); k ∈ ℤ}. {S k ; k ∈ ℤ} describes the successive outcomes s i visited by the process, where the outcome at time 0 is denoted by S 0 . T 0 describes the time interval after which the first jump of the process after time 0 occurs, and T  − 1 describes the time that has elapsed at time 0 since the last jump of the process before time 0; all other T k similarly describe the time intervals between jumps of the process. Because at time 0 the process takes the value S 0 and the process is in S 0 for the time u(S 0), T  − 1 = u(S 0) − T 0.

Technically, {(S k , T k ); k ∈ ℤ} satisfies the following conditions: (i) S k  ∈ S = {s 1, ..., s N }, N ∈ ℕ; \(T_{k}\in U=\{u_{1},\ldots,u_{\bar{N}}\}\), \(\bar{N}\in\mathbb{N}\), \(\bar{N}\leq N\), for k ≠ 0, − 1, where \(u_{i}\in\mathbb{R}^{+}\), \(1\leq i \leq \bar{N}\); T 0 ∈ (0, u(S 0)], T  − 1 ∈ [0, u(S 0)), where u : S → U, s i  ↦ u(s i ), is surjective; and hence \(\bar{M}=S\times [0, \max_{i}u_{i}]\); (ii) \(\Sigma_{\bar{M}}=\mathbb{P}(S)\times L([0, \max_{i}u_{i}])\), where L([0, max i u i ]) is the Lebesgue σ-algebra on [0, max i u i ]; (iii) {S k ; k ∈ ℤ} is a stationary irreducible and aperiodic discrete-time Markov description with outcome space S and \(p_{s_{i}}=P\{S_{0}=s_{i}\}>0\) for all i, 1 ≤ i ≤ N; (iv) T k  = u(S k ) for k ≥ 1, T k  = u(S k + 1 ) for k ≤ − 2, and T  − 1 = u(S 0) − T 0; (v) for all i, 1 ≤ i ≤ N, \(P(T_{0}\in A\,|\,S_{0}=s_{i})=\int_{A} 1/u(s_{i})\,d\lambda\) for all A ∈ L((0, u(s i )]), where L((0, u(s i )]) is the Lebesgue σ-algebra and λ the Lebesgue measure on (0, u(s i )].

Definition 10

A semi-Markov description is a stochastic description {Z t ; t ∈ ℝ} with outcome space S constructed via a stochastic description {(S k , T k ), k ∈ ℤ} as follows:

$$ \begin{array}{rll} Z_{t}&=&S_{0}\,\,\textnormal{for}\,\,-T_{-1}\leq t<T_{0}, \\ Z_{t}&=&S_{k}\,\,\textnormal{for}\,\,T_{0}+\ldots + T_{k-1}\leq t < T_{0}+\ldots + T_{k};\; k\geq 1\,\,\textnormal{and thus}\,\,t\geq T_{0}, \\ Z_{t}&=&S_{-k}\,\,\textnormal{for}\,\,-T_{-1}-\ldots-T_{-k-1}\leq t < -T_{-1}-\ldots-T_{-k};\; k\geq 1\,\,\textnormal{and thus}\,\,t<-T_{-1}, \end{array} $$

and for all i, 1 ≤ i ≤ N,

$$ P(Z_{0}=s_{i})=\frac{p_{s_{i}}u(s_{i})}{p_{s_{1}}u(s_{1})+\ldots +p_{s_{N}}u(s_{N})}. $$
(2)

Semi-Markov descriptions are stationary (Ornstein 1970 and Ornstein 1974, 56–61). In this paper I assume that the elements of U are irrationally related (u i and u j are irrationally related iff \(\frac{u_{i}}{u_{j}}\) is irrational; and the elements of \(U=\{u_{1},\ldots,u_{\bar{N}}\}\) are irrationally related iff for all i, j, i ≠ j, u i and u j are irrationally related).
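
To make Definition 10 more tangible, here is a rough Python sketch of a semi-Markov description (my own illustration; the transition matrix and holding times are hypothetical): a Markov chain {S k } picks the successive outcomes, each outcome s i is held for its time u(s i ), and the long-run fraction of time spent in each outcome can be compared with the marginal probability of Eq. 2. With the values below the two printed lists should roughly agree.

```python
import random

def simulate_semi_markov(P, u, n_jumps=200_000, seed=1):
    """Simulate the jump chain {S_k} with transition matrix P, hold outcome i for
    time u[i], and return the long-run fraction of time spent in each outcome."""
    rng = random.Random(seed)
    n = len(P)
    state = 0
    time_in_state = [0.0] * n
    for _ in range(n_jumps):
        time_in_state[state] += u[state]
        state = rng.choices(range(n), weights=P[state])[0]
    total = sum(time_in_state)
    return [t / total for t in time_in_state]

# Hypothetical example: two outcomes with irrationally related holding times.
P = [[0.3, 0.7],
     [0.6, 0.4]]            # irreducible, aperiodic transition matrix for {S_k}
u = [1.0, 2 ** 0.5]         # u(s_1) = 1, u(s_2) = sqrt(2)

empirical = simulate_semi_markov(P, u)

# Stationary distribution (p_{s_1}, p_{s_2}) of P, solved directly for the 2-state case, then Eq. 2:
p = [P[1][0] / (P[0][1] + P[1][0]), P[0][1] / (P[0][1] + P[1][0])]
eq2 = [p[i] * u[i] / sum(p[j] * u[j] for j in range(2)) for i in range(2)]
print("long-run time fractions:", empirical)
print("marginal from Eq. 2:    ", eq2)
```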

n-step semi-Markov descriptions (Footnote 24) generalise semi-Markov descriptions.

Definition 11

n-step semi-Markov descriptions are defined like semi-Markov descriptions except that condition (iii) is replaced by: (iii’) {S k ; k ∈ ℤ} is a stationary irreducible and aperiodic discrete-time n-step Markov description with outcome space S and \(p_{s_{i}}=P\{S_{0}=s_{i}\}>0\), for all i, 1 ≤ i ≤ N.

Again, n-step semi-Markov descriptions are stationary (Park 1982), and I assume that the elements of U are irrationally related.

A.2 Proof of Theorem 1

Theorem 1

Iff for a measure-preserving deterministic description (M, Σ M , μ, T t ) there does not exist an n ∈ ℝ +  and a C ∈ Σ M , 0 < μ(C) < 1, such that, except for a set of measure zero (esmz.), T n (C) = C, then the following holds: for every nontrivial finite-valued observation function Φ : M → M O , every k ∈ ℝ +  and {Z t ; t ∈ ℝ} = {Φ(T t ); t ∈ ℝ} there are o i , o j  ∈ M O with 0 < P{Z t + k  = o j  | Z t  = o i } < 1.

Proof

We need the following definitions.

Definition 12

A discrete measure-preserving deterministic description is a quadruple (M, Σ M , μ, T) where (M, Σ M , μ) is a probability space and T : M → M is a bijective measurable function such that T^{ − 1} is measurable and μ(T(A)) = μ(A) for all A ∈ Σ M .

Definition 13

A discrete measure-preserving deterministic description (M, Σ M , μ, T) is ergodic iff there is no A ∈ Σ M , 0 < μ(A) < 1, such that, esmz., T(A) = A.

(M, Σ M , μ, T) is ergodic iff for all A, B ∈ Σ M (Cornfeld et al. 1982, 14–15):

$$ \lim\limits_{n\rightarrow \infty}\frac{1}{n}\sum\limits_{i=1}^{n}\left(\mu(T^{i}(A)\cap B)-\mu(A)\mu(B)\right)=0. $$
(3)
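
As a numerical aside (my own illustration, not part of the text; the map, the sets and the sample sizes are arbitrary choices): for an ergodic transformation such as an irrational rotation of the circle, the Cesàro average in Eq. 3 can be estimated by Monte Carlo and comes out close to 0, even though the rotation is not mixing and the individual terms need not vanish.

```python
import math
import random

def cesaro_average(alpha=math.sqrt(2) - 1, n=1000, samples=4000, seed=2):
    """Estimate (1/n) * sum_{i=1..n} ( mu(T^i(A) ∩ B) - mu(A)*mu(B) ) for the
    rotation T(x) = x + alpha (mod 1) on [0, 1) with Lebesgue measure,
    A = [0, 0.25) and B = [0.5, 0.75); the rotation is ergodic, so Eq. 3
    says this average should tend to 0 as n grows."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(samples)]
    mu_A_mu_B = 0.25 * 0.25
    total = 0.0
    for i in range(1, n + 1):
        shift = (i * alpha) % 1.0
        # x lies in T^i(A) = A + i*alpha (mod 1)  iff  (x - i*alpha) mod 1 lies in A
        hits = sum(1 for x in xs if 0.5 <= x < 0.75 and (x - shift) % 1.0 < 0.25)
        total += hits / samples - mu_A_mu_B
    return total / n

print(cesaro_average())   # close to 0, up to Monte Carlo error
```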

Definition 14

α = {α1, ..., α n }, n ∈ ℕ, is a partition of (M, Σ M , μ) iff α i  ∈ Σ M , μ(α i ) > 0, for all i, 1 ≤ i ≤ n, α i  ∩ α j  = ∅ for all i ≠ j, 1 ≤ i, j ≤ n, and \(M=\bigcup_{i=1}^{n}\alpha_{i}\).

A partition is nontrivial iff n ≥ 2. Given two partitions α = {α 1, ..., α n } and β = {β 1, ..., β l } of (M, Σ M , μ), α ∨ β is the partition {α i  ∩ β j | i = 1, ..., n;j = 1, ..., l}. Given a deterministic description (M, Σ M , μ, T t ), if α is a partition, T t α = {T t (α 1), ..., T t (α n )}, t ∈ ℝ, is also a partition.

Note that any finite-valued observation function Φ (cf. Subsection 2.1) can be written as: \(\Phi(m)=\sum_{i=1}^{n}o_{i}\chi_{\alpha_{i}}(m)\), M O  = {o i  | 1 ≤ i ≤ n}, for some partition α of (M, Σ M , μ), where χ A is the characteristic function of A (cf. Cornfeld et al. 1982, 179; Footnote 25). It suffices to prove the following:

(*) Iff for (M, Σ M , μ, T t ) there does not exist an n ∈ ℝ +  and a C ∈ Σ M , 0 < μ(C) < 1, such that, esmz., T n (C) = C, then the following holds: for any nontrivial partition α = {α 1, ..., α r }, r ∈ ℕ, and all k ∈ ℝ +  there is an i ∈ {1, ..., r} such that for all j, 1 ≤ j ≤ r, μ(T k (α i ) ∖ α j ) > 0.

For finite-valued observation functions are of the form \(\sum_{l=1}^{r}o_{l}\chi_{\alpha_{l}}(m)\), where α = {α 1, ..., α r } is a partition and \(M_{O}=\cup_{l=1}^{r}o_{l}\). Consequently, the right hand side of (*) expresses that for any nontrivial finite-valued Φ : MM O and all k ∈ ℝ +  there is an o i  ∈ M O such that for all o j  ∈ M O , P{Z t + k = o j | Z t  = o i } < 1, or equivalently, that there are o i , o j  ∈ M O with 0 < P{Z t + k = o j | Z t  = o i } < 1.

  1. ⇐:

    Assume that there is an n ∈ ℝ +  and a C ∈ Σ M , 0 < μ(C) < 1, such that, esmz., T n (C) = C. Then for α = {C,M ∖ C} it holds that μ(T n (C) ∖ C) = 0 and μ(T n (M ∖ C) ∖ (M ∖ C)) = 0.

  2. ⇒:

    Assume that the conclusion of (*) does not hold, and hence that there is a nontrivial partition α and a k ∈ ℝ +  such that for each α i there is an α j with, esmz., \(T_{k}(\alpha_{i})\subseteq \alpha_{j}\). From the assumptions it follows that for every k ∈ ℝ +  the discrete deterministic description (M, Σ M , μ, T k ) is ergodic (cf. Definition 13).

    • Case 1: For every i there is a j such that, esmz., T k (α i ) = α j . Because the discrete description (M, Σ M , μ, T k ) is ergodic (Eq. 3), there is an h ∈ ℕ such that, esmz., T kh (α 1 ) = α 1 . But this contradicts the assumption that there is no n ∈ ℝ +  and no C ∈ Σ M , 0 < μ(C) < 1, such that, esmz., T n (C) = C.

    • Case 2: There exists an i and a j with, esmz., T k (α i ) ⊂ α j and with μ(α i ) < μ(α j ). Because the discrete description (M, Σ M , μ, T k ) is ergodic (Eq. 3), there is an h ∈ ℕ such that, esmz., \(T_{hk}(\alpha_{j})\subseteq\alpha_{i}\). Hence μ(α j ) ≤ μ(α i ), contradicting μ(α i ) < μ(α j ) ≤ μ(α i ).
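
To illustrate the correspondence between finite-valued observation functions and partitions that this proof relies on, here is a toy Python sketch (mine; discrete time and the doubling map are used only for simplicity, and all names are hypothetical): applying Φ(m) = ∑ o i χ α i (m) along an orbit records only which cell the state visits, which is exactly the coarse-grained stochastic description {Φ(T t )}.

```python
def observation(m, partition, outcomes):
    """Phi(m) = sum_i o_i * chi_{alpha_i}(m): return the outcome of the cell containing m."""
    for (lo, hi), o in zip(partition, outcomes):
        if lo <= m < hi:
            return o
    raise ValueError("m lies in no cell of the partition")

def doubling_map(m):
    """A toy deterministic map on [0, 1), standing in for the time evolution (discrete time here)."""
    return (2 * m) % 1.0

# Partition alpha = {[0, 0.5), [0.5, 1)} with outcomes o_1, o_2.
partition = [(0.0, 0.5), (0.5, 1.0)]
outcomes = ["o1", "o2"]

m = 0.2137
record = []
for _ in range(12):
    record.append(observation(m, partition, outcomes))
    m = doubling_map(m)
print(record)   # the coarse-grained record of the deterministic orbit
```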

A.3 Proof of Theorem 4

Theorem 4

Let (M, Σ M , μ, T t ) be a continuous Bernoulli system. Then for every finite-valued Φ and every ε > 0 there is a semi-Markov description {Z t ; t ∈ ℝ} which weakly (Φ, ε)-simulates (M, Σ M , μ, T t ).

Proof

Let (M, Σ M , μ, T t ) be a continuous Bernoulli system, let Φ : M → S, S = {s 1, ..., s N }, N ∈ ℕ, be a surjective observation function, and let ε > 0. Theorem 3 implies that there is a surjective observation function \(\Theta:M\rightarrow S,\,\,\Theta(m)=\sum_{i=1}^{N}s_{i}\chi_{\alpha_{i}}(m)\), for a partition α (cf. Definition 14), such that {Y t  = Θ(T t ); t ∈ ℝ} is an n-step semi-Markov description with outcomes s i and corresponding times u(s i ) which strongly (Φ, ε)-simulates (M, Σ M , μ, T t ).

I need the following definition:

Definition 15

\((M_{2},\Sigma_{M_{2}},\mu_{2},T^{2}_{t})\) is a factor of \((M_{1},\Sigma_{M_{1}},\mu_{1},T^{1}_{t})\) (where both deterministic descriptions are measure-preserving) iff there are \(\hat{M}_{i}\subseteq M_{i}\) with \(\mu_{i}(M_{i}\setminus\hat{M}_{i})=0\), \(T^{i}_{t}\hat{M}_{i}\subseteq\hat{M}_{i}\) for all t (i = 1, 2), and there is a function \(\phi:\hat{M}_{1}\rightarrow \hat{M}_{2}\) such that (i) \(\phi^{-1}(B)\in\Sigma_{M_{1}}\) for all \(B\in\Sigma_{M_{2}},\,B\subseteq\hat{M}_{2}\); (ii) \(\mu_{1}(\phi^{-1}(B))=\mu_{2}(B)\) for all \(B\in\Sigma_{M_{2}},\,B\subseteq\hat{M}_{2}\); (iii) \(\phi(T^{1}_{t}(m))=T^{2}_{t}(\phi(m))\) for all \(m\in\hat{M}_{1}\), t ∈ ℝ.

Note that the deterministic representation (X, Σ X , μ X , W t , Λ0) of {Y t ; t ∈ ℝ} is a factor of (M, Σ M , μ, T t ) via ϕ(m) = r m (r m is the realisation of m) (cf. Ornstein and Weiss 1991, 18).

Now I construct a measure-preserving deterministic description (K, Σ K , μ K , R t ) as follows. Let (Ω, ΣΩ, μ Ω, V, Ξ0), \(\Xi_{0}(\omega)=\sum_{i=1}^{N}s_{i}\chi_{\beta_{i}}(\omega)\), where β is a partition, be the deterministic representation of {S k ; k ∈ ℤ}, the irreducible and aperiodic n-step Markov description corresponding to {Y t ; t ∈ ℝ}. Let f : Ω → {u 1, ..., u N }, f(ω) = u(Ξ 0 (ω)). Define K as \(\cup_{i=1}^{N}K_{i}=\cup_{i=1}^{N}(\beta_{i}\times [0,u(s_{i})))\). Let \(\Sigma_{K_{i}}\), 1 ≤ i ≤ N, be the product σ-algebra (ΣΩ ∩ β i ) × L([0, u(s i ))) where L([0, u(s i ))) is the Lebesgue σ-algebra of [0, u(s i )). Let \(\mu_{K_{i}}\) be the product measure

$$ \left(\mu_{\Omega}^{\Sigma_{\Omega}\cap\beta_{i}}\times\lambda([0,u(s_{i})))\right)/\sum\limits_{j=1}^{N}u\left(s_{j}\right)\mu_{\Omega}(\beta_{j}), $$
(4)

where λ([0, u(s i ))) is the Lebesgue measure on [0, u(s i )) and \(\mu_{\Omega}^{\Sigma_{\Omega}\cap\beta_{i}}\) is the measure μ Ω restricted to ΣΩ ∩ β i . Let Σ K be the σ-algebra generated by \(\cup_{i=1}^{N}\Sigma_{K_{i}}\). Define a pre-measure \(\bar{\mu}_{K}\) on \(H=(\cup_{i=1}^{N}(\Sigma_{\Omega}\cap\beta_{i}\times L([0,u(s_{i})))))\cup K\) by \(\bar{\mu}_{K}(K)=1\) and \(\bar{\mu}_{K}(A)=\mu_{K_{i}}(A)\) for \(A\in\Sigma_{K_{i}}\), and let μ K be the unique extension of this pre-measure to a measure on Σ K . Finally, R t is defined as follows: let the state at time zero be represented by (k, v) ∈ K, k ∈ Ω, v < f(k); the representation of the state moves vertically with unit velocity, and just before it reaches (k, f(k)) it jumps to (V(k), 0) at time f(k) − v; then it again moves vertically with unit velocity, and just before it reaches (V(k), f(V(k))) it jumps to (V^{2}(k), 0) at time f(V(k)) + f(k) − v, and so on. (K, Σ K , μ K , R t ) is a measure-preserving deterministic description (a ‘flow built under the function f’). (X, Σ X , μ X , W t ) is proven to be isomorphic (via a function ψ) to (K, Σ K , μ K , R t ) (Park 1982).
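
As an aside, the action of R t can be pictured with a short Python sketch (my own toy version; the base map and the roof values are hypothetical stand-ins for V and f): the state (k, v) rises with unit velocity and, on reaching the roof f(k), jumps to (V(k), 0).

```python
def suspension_flow(k, v, t, V, f):
    """Advance the state (k, v), with 0 <= v < f(k), by time t >= 0 under the flow built under f.

    The point moves vertically with unit velocity; on reaching height f(k) it
    jumps to (V(k), 0) and continues rising.
    """
    v += t
    while v >= f(k):
        v -= f(k)
        k = V(k)
    return k, v

# Toy base map V on three symbolic states and hypothetical roof function f.
states = ["a", "b", "c"]
V = lambda k: states[(states.index(k) + 1) % 3]        # cyclic shift as a stand-in for V
f = lambda k: {"a": 1.0, "b": 2 ** 0.5, "c": 0.5}[k]   # irrationally related roof values

print(suspension_flow("a", 0.3, 2.6, V, f))            # state after flowing for time 2.6 from ("a", 0.3)
```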

Consider \(\gamma=\{\gamma_{1},\ldots,\gamma_{l}\}=\beta\vee V\beta\vee\ldots\vee V^{n-1}\beta\) and \(\Pi(\omega)=\sum_{j=1}^{l}o_{j}\chi_{\gamma_{j}}(\omega)\), o i  ≠ o j for i ≠ j, 1 ≤ i, j ≤ l. I now show that \(\{B_{t}=\Pi(V^{t}(\omega));\,\,t\in\mathbb{Z}\}\) is an irreducible and aperiodic discrete-time Markov stochastic description. By construction, for all t and all i, 1 ≤ i ≤ l, there are q i,0, ..., q i,n − 1 ∈ S such that

$$ P\{B_{t}=o_{i}\}=P\left\{S_{t}=q_{i,0}, S_{t+1}=q_{i,1},\ldots,S_{t+n-1}=q_{i,n-1}\right\}. $$
(5)

Therefore, for all k ∈ ℕ and all i, j 1, ..., j k , 1 ≤ i, j 1, ..., j k  ≤ l:

$$ \begin{array}{l} P\left\{B_{t+1}=o_{i}\,|\,B_{t}=o_{j_{1}},\ldots,B_{t-k+1}=o_{j_{k}}\right\} \\ \quad = P\left\{S_{t+1}=q_{i,0},\ldots,S_{t+n}=q_{i,n-1}\,|\,S_{t}=q_{j_{1},0},\ldots,S_{t+n-1}=q_{i,n-2},\,S_{t-1}=q_{j_{2},0},\ldots,S_{t-k+1}=q_{j_{k},0}\right\} \\ \quad = P\left\{S_{t+1}=q_{i,0},\ldots,S_{t+n}=q_{i,n-1}\,|\,S_{t}=q_{j_{1},0},\ldots,S_{t+n-1}=q_{i,n-2}\right\} \\ \quad = P\left\{B_{t+1}=o_{i}\,|\,B_{t}=o_{j_{1}}\right\}, \end{array} $$
(6)

if \(P\{B_{t+1}=o_{i},B_{t}=o_{j_{1}},\ldots,B_{t-k+1}=o_{j_{k}}\}>0\). Hence {B t ; t ∈ ℤ} is a Markov description. Now a discrete measure-preserving deterministic description (M, Σ M , μ, T) is mixing iff for all A, B ∈ Σ M

$$ \lim\limits_{t\rightarrow\infty}\mu(T^{t}(A)\cap B)=\mu(A)\mu(B). $$
(7)

The deterministic representation of every irreducible and aperiodic n-step Markov description, and hence (Ω, ΣΩ, μ Ω, V, Ξ0), is mixing (Ornstein 1974, 45–47). This implies that {B t ; t ∈ ℤ} is irreducible and aperiodic.
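
As an aside, the step from the n-step Markov description {S k } to the Markov description {B t } is in effect the familiar recoding of an n-step chain by overlapping blocks of length n. A rough Python illustration (mine; the 2-step transition probabilities are hypothetical):

```python
import random

def sample_two_step_chain(p, length=30, seed=3):
    """Sample a 2-step Markov chain on {0, 1}: the next symbol depends on the last two.

    p[(a, b)] is the probability that the next symbol is 1 given the last two symbols were (a, b).
    """
    rng = random.Random(seed)
    s = [0, 1]                                   # arbitrary initial block
    for _ in range(length - 2):
        prob_one = p[(s[-2], s[-1])]
        s.append(1 if rng.random() < prob_one else 0)
    return s

# Hypothetical 2-step transition probabilities.
p = {(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.5, (1, 1): 0.1}

s = sample_two_step_chain(p)
# Recode into overlapping blocks of length n = 2: B_t = (S_t, S_{t+1}).
blocks = [(s[t], s[t + 1]) for t in range(len(s) - 1)]
print(s)
print(blocks)   # {B_t} is an ordinary (1-step) Markov chain
```

Each block B t  = (S t , S t + 1 ) already fixes all but the last symbol of the next block, and that last symbol depends only on B t for a 2-step chain, so conditioning on earlier blocks adds nothing, as in Eq. 6.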

Let \(\Delta(k)=\sum_{i=1}^{l}o_{i}\chi_{\gamma_{i}\times[0,u(o_{i}))}(k)\), where u(o i ), 1 ≤ i ≤ l, is defined as follows: u(o i ) = u(s r ) where \(\gamma_{i}\subseteq \beta_{r}\). It follows immediately that {X t  = Δ(R t ); t ∈ ℝ} is a semi-Markov description. Consider the surjective function Ψ : M → {o 1, ..., o l }, Ψ(m) = Δ(ψ(ϕ(m))) for \(m\in\hat{M}\) and o 1 otherwise. Recall that (X, Σ X , μ X , W t ) is a factor (via ϕ) of (M, Σ M , μ, T t ) and that (X, Σ X , μ X , W t ) is isomorphic (via ψ) to (K, Σ K , μ K , R t ). Therefore, {Z t  = Ψ(T t ); t ∈ ℝ} is a semi-Markov description with outcomes o i and times u(o i ), 1 ≤ i ≤ l.

Now consider the surjective observation function Γ : {o 1, ..., o l } → S, where Γ(o i ) = s r for \(\gamma_{i}\subseteq\beta_{r}\), 1 ≤ i ≤ l. By construction, esmz., Γ(Ψ(T t (m))) = Θ(T t (m)) = Y t (m) for all t ∈ ℝ. Hence, because {Y t ; t ∈ ℝ} strongly (Φ, ε)-simulates (M, Σ M , μ, T t ), μ({m ∈ M | Γ(Ψ(m)) ≠ Φ(m)}) < ε.

A.4 Proof of Proposition 1

Proposition 1

Let (M, Σ M , μ, T t ) be a measure-preserving deterministic description where (M, d M ) is separable and where Σ M contains all open sets of (M, d M ). Assume that (M, Σ M , μ, T t ) satisfies the assumption of Theorem 1 and has finite KS-entropy. Then for every ε > 0 there is a stochastic description {Z t ; t ∈ ℝ} with outcome space \(M_{O}=\cup_{l=1}^{h}o_{l}\), h ∈ ℕ, such that {Z t ; t ∈ ℝ} is ε-congruent to (M, Σ M , μ, T t ), and for all k ∈ ℝ +  there are o i , o j  ∈ M O such that 0 < P{Z t + k  = o j  | Z t  = o i } < 1.

Proof

A partition α (cf. Definition 14) is generating for the discrete measure-preserving description (M, Σ M , μ, T) iff for every A ∈ Σ M and every ε > 0 there is an n ∈ ℕ and a union C of elements of \(\vee_{j=-n}^{n}T^{j}(\alpha)\) such that μ((A ∖ C) ∪ (C ∖ A)) < ε (Petersen 1983, 244). α is generating for the measure-preserving description (M, Σ M , μ, T t ) iff for all A ∈ Σ M and all ε > 0 there is a τ ∈ ℝ +  and a union C of elements of \(\bigcup_{\textnormal{all}\,\,m}\bigcap_{t=-\tau}^{\tau}(T^{-t}(\alpha(T^{t}(m))))\) such that μ((A ∖ C) ∪ (C ∖ A)) < ε (α(m) is the set α j  ∈ α with m ∈ α j ).

By assumption, there is a \(t_{0}\in\mathbb{R}^{+}\) such that the discrete description \((M,\Sigma_{M},\mu,T_{t_{0}})\) is ergodic (cf. Definition 13). For discrete ergodic descriptions Krieger’s (1970) theorem implies that there is a partition α which is generating for \((M,\Sigma_{M},\mu,T_{t_{0}})\) and hence generating for (M, Σ M , μ, T t ). Since (M, d M ) is separable, for every ε > 0 there is an r ∈ ℕ and m i  ∈ M, 1 ≤ i ≤ r, such that \(\mu(M\setminus\cup_{i=1}^{r}B(m_{i},\frac{\varepsilon}{2}))<\frac{\varepsilon}{2}\). Because α is generating for \((M,\Sigma_{M},\mu,T_{t_{0}})\), for each \(B(m_{i},\frac{\varepsilon}{2})\) there is an n i  ∈ ℕ and a union C i of elements of \(\vee_{j=-n_{i}}^{n_{i}}T_{jt_{0}}(\alpha)\) such that \(\mu((B(m_{i},\frac{\varepsilon}{2})\setminus C_{i})\cup (C_{i}\setminus B(m_{i},\frac{\varepsilon}{2})))<\frac{\varepsilon}{2r}\). Let \(n=\max\{n_{i}\}\), \(\beta=\{\beta_{1},\ldots,\beta_{l}\}=\vee_{j=-n}^{n}T_{jt_{0}}(\alpha)\) and \(\Psi(m)=\sum_{i=1}^{l}o_{i}\chi_{\beta_{i}}(m)\) with o i  ∈ β i . Since Ψ is finite-valued, Theorem 1 implies that for the stochastic description {Z t ; t ∈ ℝ} = {Ψ(T t ); t ∈ ℝ} and all k ∈ ℝ +  there are o i , o j such that 0 < P{Z t + k  = o j  | Z t  = o i } < 1. Because α is generating for (M, Σ M , μ, T t ), β is generating too. This implies that (M, Σ M , μ, T t ) is isomorphic (via a function ϕ) to the deterministic representation \((M_{2},\Sigma_{M_{2}},\mu_{2},T^{2}_{t},\Phi_{0})\) of {Z t ; t ∈ ℝ} (Petersen 1983, 274). And, by construction, d M (m, Φ0(ϕ(m))) < ε except for a set in M of measure smaller than ε.
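
As an informal illustration of the generating-partition idea used in this proof (my own sketch, in discrete time with the doubling map rather than the paper's setting): refining the partition {[0, 1/2), [1/2, 1)} along the orbit pins a point down to within 2^{ − n}, so the symbolic record determines the state arbitrarily well.

```python
def doubling_map(x):
    """Toy deterministic map T(x) = 2x mod 1 on [0, 1)."""
    return (2 * x) % 1.0

def itinerary(x, n):
    """Record which cell of the partition alpha = {[0, 0.5), [0.5, 1)} the orbit visits."""
    symbols = []
    for _ in range(n):
        symbols.append(0 if x < 0.5 else 1)
        x = doubling_map(x)
    return symbols

def reconstruct(symbols):
    """Recover x from its itinerary: for the doubling map the symbols are the binary
    digits of x, so n symbols determine x up to an error of 2**(-n)."""
    return sum(s / 2 ** (k + 1) for k, s in enumerate(symbols))

x = 0.3711
approx = reconstruct(itinerary(x, 20))
print(x, approx, abs(x - approx) < 2 ** -20)   # the refined partition separates points
```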

Cite this article

Werndl, C. On the observational equivalence of continuous-time deterministic and indeterministic descriptions. Euro Jnl Phil Sci 1, 193–225 (2011). https://doi.org/10.1007/s13194-010-0011-5
