1 Introduction

The concept of entropy plays a central role in information theory [2]: entropy quantifies the amount of information carried by the outcome of a random process. Entropy has also found applications in other areas, including physics, computer science, statistics, chemistry, biology, sociology, and general systems theory; moreover, the entire modern technology and telecommunications industry is built on this quantification of information. The study of the concept of entropy is therefore very important for modern scientific disciplines. As is well known, the standard approach in classical information theory is based on the Shannon entropy [3]. We recall that the Shannon entropy of a probability distribution \(P = \{ p_{1},p_{2},\ldots,p_{n} \}\) is the number \(H_{S}(P) = \sum_{i = 1}^{n} S(p_{i})\), where \(S: [ 0, 1 ] \to [ 0, \infty)\) is the Shannon entropy function defined by

$$ S(x) = \left \{ \textstyle\begin{array}{l@{\quad}l} - x\log x, &\mbox{if } x \in ( 0, 1 ]; \\ 0, &\mbox{if } x = 0. \end{array}\displaystyle \right . $$
(1.1)

The Kolmogorov–Sinai entropy [4–7] provides an important generalization of the Shannon entropy; it has strongly influenced our understanding of the complexity of dynamical systems. The concept has shown its strength through highly adequate answers to central problems in the classification of dynamical systems. Two metrically isomorphic dynamical systems have the same Kolmogorov–Sinai entropy, so the Kolmogorov–Sinai entropy is a tool for distinguishing non-isomorphic dynamical systems.

To address some specific problems, it is preferable to use, instead of the Shannon entropy, an approach based on the concept of logical entropy [1, 8, 9] (see also [10–16]). In [1], the classical logical entropy was discussed by Ellerman as an alternative measure of information. If \(P = \{ p_{1},p_{2},\ldots,p_{n} \}\) is a probability distribution, then the logical entropy of P is defined by the formula \(H_{L}(P) = \sum_{i = 1}^{n} L(p_{i})\), where \(L: [ 0, 1 ] \to[ 0, \infty )\) is the logical entropy function defined, for every \(x \in [ 0, 1 ]\), by the formula:

$$ L(x) = x(1 - x). $$
(1.2)
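For illustration, both entropy measures can be computed directly from these definitions. The following is a minimal Python sketch (the function names are our own, chosen for illustration); since \(\sum_{i} p_{i} = 1\), the logical entropy equals \(1 - \sum_{i} p_{i}^{2}\). The Shannon entropy is computed here with the natural logarithm.

```python
import math

def shannon_entropy_function(x):
    # S(x) = -x log x for x in (0, 1], with S(0) = 0; see Eq. (1.1)
    return -x * math.log(x) if x > 0 else 0.0

def logical_entropy_function(x):
    # L(x) = x(1 - x); see Eq. (1.2)
    return x * (1.0 - x)

def H_S(P):
    # Shannon entropy of a probability distribution P
    return sum(shannon_entropy_function(p) for p in P)

def H_L(P):
    # logical entropy of a probability distribution P
    return sum(logical_entropy_function(p) for p in P)

P = [0.5, 0.25, 0.25]
print(H_S(P))  # 1.0397... (natural logarithm)
print(H_L(P))  # 0.625 = 1 - (0.25 + 0.0625 + 0.0625)
```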

The main aim of this paper is to extend the study of logical entropy presented in [1] to the case of dynamical systems; by replacing the Shannon entropy function (1.1) with the logical entropy function (1.2), we construct an isomorphism theory of the Kolmogorov–Sinai type. The paper is organized as follows. Section 2 provides the basic definitions and notations used in the following sections. Our results are presented in Sects. 3 and 4. In Sect. 3, we define and study the logical entropy and the conditional logical entropy of finite measurable partitions. In Sect. 4, using the concept of logical entropy of measurable partitions, the notion of logical entropy of a dynamical system is introduced. It is proved that metrically isomorphic dynamical systems have the same logical entropy. Finally, a version of the Kolmogorov–Sinai theorem for the case of the logical entropy is proved. Some concluding remarks are presented in the last section.

2 Preliminaries

Modern probability theory is almost exclusively based on the axioms of Kolmogorov [17]. Let us start by recalling Kolmogorov’s concept of probability space. We consider a non-empty set Ω; some subsets of Ω will be called events. Denote by S the family of all events. In classical probability theory [18] it is assumed that S is a σ-algebra, i.e., S is a family of subsets of Ω such that (i) \(\Omega\in S\); (ii) if \(A \in S\), then \(\Omega- A \in S\); (iii) if \(A_{n} \in S\) (\(n = 1, 2,\ldots\)), then \(\bigcup_{n = 1}^{ \infty} A_{n} \in S\). The couple \((\Omega, S)\) is said to be a measurable space; the elements of S are said to be measurable sets.

Let \((\Omega, S)\) be a measurable space. A mapping \(\mu: S \to [ 0, 1 ]\) is called a probability measure if the following properties are satisfied: (i) \(\mu(\Omega) = 1\); (ii) \(\mu(A) \ge0\), for every \(A \in S\); (iii) if \(\{ A_{n} \}_{n = 1}^{\infty} \subset S\) such that \(A_{i} \cap A_{j} = \emptyset\) whenever \(i \ne j\), then \(\mu(\bigcup_{n = 1}^{ \infty} A_{n} ) = \sum_{n = 1}^{\infty} \mu(A_{n})\). The triplet \((\Omega, S, \mu)\) described above is said to be a probability space.

Next, we present the definitions of the basic terms that we will need in the following sections.

Definition 2.1

([19])

Let \((\Omega, S)\) be a measurable space. Each finite sequence \(\{ A_{1},A_{2},\ldots, A_{n} \}\) of pairwise disjoint measurable subsets of Ω such that \(\bigcup_{i = 1}^{ n} A_{i} = \Omega\) is called a (measurable) partition of Ω.

Definition 2.2

([19])

Let \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) and \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\) be two partitions of Ω. The partition β is said to be a refinement of α if for each \(A_{i} \in\alpha\) there exists a subset \(I_{i} \subset \{ 1,2,\ldots,m \}\) such that \(A_{i} = \bigcup_{j \in I_{i}}B_{j}\), \(I_{i} \cap I_{j} = \emptyset\) for \(i \ne j\), and \(\bigcup_{i = 1}^{n}I_{i} = \{ 1,2,\ldots,m \}\). In this case we write \(\alpha\prec\beta\).

Definition 2.3

([19])

Given two partitions \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) and \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\) of Ω, their join \(\alpha\vee\beta\) is defined as the system

$$\alpha\vee\beta= \{ A_{i} \cap B_{j}; i = 1,2,\ldots,n, j = 1,2,\ldots,m \}. $$

Remark 2.1

It is easy to see that if \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) and \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\) are two partitions of Ω, then \(\alpha\vee\beta\) is also a partition of Ω. Moreover, \(\alpha\prec\alpha\vee\beta\), and \(\beta\prec\alpha\vee\beta\). Since the system \(\alpha\vee\beta\) is indexed by \(\{ ( i,j ); i = 1,2,\ldots,n, j = 1,2,\ldots,m \}\), we put \(I_{i} = \{ ( i,1 ),\ldots, ( i,m ) \}\), \(i = 1,2,\ldots,n\). Since β is a partition of Ω, we have \(\bigcup_{j = 1}^{m}B_{j} = \Omega\), and therefore we get

$$A_{i} = \Omega\cap A_{i} = \Biggl( \bigcup _{j = 1}^{m}B_{j}\Biggr) \cap A_{i} = \bigcup_{j = 1}^{m}(B_{j} \cap A_{i}) = \bigcup_{(r,j) \in I_{i}}(A_{r} \cap B_{j}) $$

for \(i = 1,2,\ldots,n\). But this means that \(\alpha\prec\alpha\vee \beta\).
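For a finite Ω, partitions and their join can be manipulated explicitly. The following Python sketch is our own illustration: blocks are represented as frozensets, and empty intersections are discarded from the join, which affects none of the entropies considered below. It computes \(\alpha\vee\beta\) and checks the refinements \(\alpha\prec\alpha\vee\beta\) and \(\beta\prec\alpha\vee\beta\).

```python
def join(alpha, beta):
    # join of two partitions: all nonempty intersections A_i ∩ B_j
    return [A & B for A in alpha for B in beta if A & B]

def is_refinement(alpha, beta):
    # alpha ≺ beta: every block of alpha is a union of blocks of beta
    return all(A == frozenset().union(*[B for B in beta if B <= A])
               for A in alpha)

omega = frozenset(range(6))
alpha = [frozenset({0, 1, 2}), frozenset({3, 4, 5})]
beta = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]

ab = join(alpha, beta)           # blocks {0,1}, {2}, {3}, {4,5}
print(is_refinement(alpha, ab))  # True: alpha ≺ alpha ∨ beta
print(is_refinement(beta, ab))   # True: beta ≺ alpha ∨ beta
```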

Definition 2.4

([19])

Let α and β be two partitions of Ω. Then \(\alpha\subset^{ \circ} \beta\) if for each \(A \in\alpha\) there exists \(B \in\beta\) such that \(\mu ( A \Delta B ) = 0\), where \(A \Delta B = (A - B) \cup(B - A)\) denotes the symmetric difference of sets \(A,B \in S\). We write \(\alpha\approx \beta\) if \(\alpha\subset^{ \circ} \beta\) and \(\beta\subset^{ \circ} \alpha\).

Remark 2.2

The relation ≈ is an equivalence relation in the family of all measurable partitions of Ω.

3 Logical entropy of measurable partitions

In this section, we introduce the concept of logical entropy of measurable partitions and present basic properties of this measure of information. It is shown that its properties are analogous to those of the Shannon entropy of measurable partitions.

Definition 3.1

Let \(\alpha=\{A_{1},A_{2},\ldots,A_{n}\}\) be a partition of Ω. The logical entropy of α is defined as the number

$$ H_{L}(\alpha) =\sum_{i = 1}^{n} \mu(A_{i}) \bigl(1 - \mu (A_{i})\bigr). $$
(3.1)

Remark 3.1

Since \(\sum_{i = 1}^{n} \mu ( A_{i} ) = \mu ( \bigcup_{i = 1}^{n}A_{i}) = \mu(\Omega) = 1\), we can also write

$$H_{L}(\alpha) =1 - \sum_{i = 1}^{n} \bigl( \mu(A_{i}) \bigr)^{2}. $$

Remark 3.2

Evidently, \(H_{L}(\alpha) \ge0\). The logical entropy \(H_{L}(\alpha)\) attains its maximum value \(1 - \frac{1}{n}\) for the uniform distribution \(p_{i} = \mu(A_{i}) = \frac{1}{n}\), \(i = 1,2,\ldots,n\), over \(\alpha=\{A_{1},A_{2},\ldots,A_{n}\}\). Thus \(0 \le H_{L}(\alpha) \le1 - \frac{1}{n}\).
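These bounds are easy to check numerically from the formula in Remark 3.1; a minimal sketch (block probabilities are passed directly):

```python
def logical_entropy(probs):
    # H_L(alpha) = 1 - sum of squared block probabilities (Remark 3.1)
    return 1.0 - sum(p * p for p in probs)

n = 4
uniform = [1.0 / n] * n
print(logical_entropy(uniform))               # 0.75 = 1 - 1/n, the maximum
print(logical_entropy([1.0, 0, 0, 0]))        # 0.0, the minimum
print(logical_entropy([0.5, 0.3, 0.1, 0.1]))  # 0.64, strictly between
```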

Theorem 3.1

Let α and β be two partitions of Ω. Then

  1. (i)

    \(\alpha\prec\beta\) implies \(H_{L}(\alpha) \le H_{L}(\beta )\);

  2. (ii)

    \(H_{L}(\alpha\vee\beta) \ge\max(H_{L}(\alpha); H_{L}(\beta))\).

Proof

Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\), \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\), \(\alpha\prec \beta\). Then for each \(A_{i} \in\alpha\) there exists a subset \(I_{i} \subset \{ 1,2,\ldots,m \}\) such that \(A_{i} = \bigcup_{j \in I_{i}}B_{j}\), \(I_{i} \cap I_{j} = \emptyset\) for \(i \ne j\), and \(\bigcup_{i = 1}^{n}I_{i} = \{ 1,2,\ldots,m \}\). Therefore we can write

$$\begin{aligned} H_{L}(\alpha) &=\sum _{i = 1}^{n} \mu(A_{i}) \bigl(1 - \mu(A_{i})\bigr) = \sum_{i = 1}^{n} \bigl(\mu(A_{i}) - \mu(A_{i})\mu(A_{i})\bigr) \\ &= \sum_{i = 1}^{n} \biggl(\mu \biggl( \bigcup_{j \in I_{i}}B_{j}\biggr) - \mu \biggl( \bigcup_{j \in I_{i}}B_{j}\biggr)\mu\biggl( \bigcup_{j \in I_{i}}B_{j}\biggr)\biggr) \\ &= \sum_{i = 1}^{n} \biggl( \sum _{j \in I_{i}} \mu(B_{j}) - \sum _{j \in I_{i}} \mu(B_{j}) \sum _{j \in I_{i}} \mu(B_{j}) \biggr). \end{aligned} $$

As a consequence of the inequality \(( x + y )^{2} \ge x^{2} + y^{2}\), which holds for all nonnegative real numbers x, y (and extends by induction to any finite number of nonnegative terms), we get

$$\sum_{j \in I_{i}} \mu(B_{j}) \sum _{j \in I_{i}} \mu(B_{j}) \ge\sum _{j \in I_{i}} \bigl(\mu(B_{j})\bigr)^{2} $$

for \(i = 1,2,\ldots,n\). Hence

$$\begin{aligned} H_{L}(\alpha) &\le\sum _{i = 1}^{n} \biggl( \sum_{j \in I_{i}} \mu(B_{j}) - \sum_{j \in I_{i}} \bigl(\mu(B_{j})\bigr)^{2} \biggr) \\ &= \sum_{i = 1}^{n} \sum _{j \in I_{i}} \bigl( \mu(B_{j}) - \bigl( \mu(B_{j})\bigr)^{2} \bigr) \\ &= \sum_{j = 1}^{m} \mu(B_{j}) \bigl(1 - \mu(B_{j})\bigr) = H_{L}( \beta). \end{aligned} $$

Since \(\alpha\prec\alpha\vee\beta\) and \(\beta\prec\alpha\vee \beta\), the inequality (ii) is a simple consequence of property (i). □

Definition 3.2

If \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) and \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\) are two partitions of Ω, then the logical conditional entropy of α given a realization of the experiment β is defined as the number

$$H_{L} ( \alpha/\beta )= \sum_{i = 1}^{n} \sum_{j = 1}^{m} \mu (A_{i} \cap B_{j}) \bigl(\mu(B_{j}) - \mu(A_{i} \cap B_{j})\bigr). $$

Remark 3.3

The monotonicity of the probability measure μ implies the inequality \(\mu(A_{i} \cap B_{j}) \le\mu(B_{j})\), so the logical conditional entropy \(H_{L} ( \alpha/\beta )\) is evidently a nonnegative number.

Proposition 3.1

Let \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) be a partition of Ω. Then, for any \(B \in S\), we have \(\sum_{i = 1}^{n} \mu (A_{i} \cap B) = \mu(B)\).

Proof

Since \((A_{i} \cap B) \cap(A_{j} \cap B) = \emptyset\) whenever \(i \ne j\), by the additivity of probability measure μ we get

$$\mu(B) = \mu(\Omega\cap B) = \mu \Biggl( \Biggl( \bigcup _{i = 1}^{n}A_{i}\Biggr) \cap B \Biggr) = \mu \Biggl( \bigcup_{i = 1}^{n}(A_{i} \cap B) \Biggr) = \sum_{i = 1}^{n} \mu ( A_{i} \cap B ). $$

 □

Remark 3.4

Let \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) and \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\) be two partitions of Ω. Since by Proposition 3.1, for \(j = 1,2,\ldots,m\), \(\sum_{i = 1}^{n} \mu(A_{i} \cap B_{j}) = \mu(B_{j})\), we can also write:

$$H_{L} ( \alpha/\beta )= \sum_{j = 1}^{m} \bigl( \mu(B_{j}) \bigr)^{2} - \sum _{i = 1}^{n} \sum_{j = 1}^{m} \bigl( \mu(A_{i} \cap B_{j}) \bigr)^{2}. $$
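The two expressions for \(H_{L} ( \alpha/\beta )\) (Definition 3.2 and the form above) can be compared numerically. The following is a minimal sketch on a finite Ω carrying point masses; the representation and helper names are our own.

```python
def mu(E, weights):
    # probability of a measurable set E under point masses 'weights'
    return sum(weights[x] for x in E)

def cond_logical_entropy(alpha, beta, weights):
    # Definition 3.2: sum over i, j of mu(A ∩ B)(mu(B) - mu(A ∩ B))
    return sum(mu(A & B, weights) * (mu(B, weights) - mu(A & B, weights))
               for A in alpha for B in beta)

def cond_logical_entropy_alt(alpha, beta, weights):
    # Remark 3.4: sum mu(B)^2 - sum mu(A ∩ B)^2
    return (sum(mu(B, weights) ** 2 for B in beta)
            - sum(mu(A & B, weights) ** 2 for A in alpha for B in beta))

weights = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
alpha = [frozenset({0, 1}), frozenset({2, 3})]
beta = [frozenset({0, 2}), frozenset({1, 3})]
print(cond_logical_entropy(alpha, beta, weights))      # 0.22
print(cond_logical_entropy_alt(alpha, beta, weights))  # 0.22, identical
```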

Theorem 3.2

Let α and β be two partitions of Ω. Then

$$H_{L} ( \alpha/\beta ) = H_{L} ( \alpha\vee\beta ) - H_{L}(\beta). $$

Proof

Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\), \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\). Let us calculate

$$\begin{aligned} H_{L}(\beta) + H_{L} ( \alpha/ \beta ) &=1 -\sum_{j = 1}^{m} \bigl( \mu(B_{j}) \bigr)^{2}+ \sum_{j = 1}^{m} \bigl( \mu(B_{j}) \bigr)^{2} - \sum _{i = 1}^{n} \sum_{j = 1}^{m} \bigl( \mu(A_{i} \cap B_{j}) \bigr)^{2} \\ &= 1 - \sum_{i = 1}^{n} \sum _{j = 1}^{m} \bigl( \mu(A_{i} \cap B_{j}) \bigr)^{2} = H_{L}(\alpha\vee\beta). \end{aligned} $$

 □

Remark 3.5

As a simple consequence of the previous theorem we get

$$ H_{L}(\alpha\vee\beta) = H_{L}(\alpha) + H_{L}( \beta/\alpha ). $$
(3.2)

Based on the principle of mathematical induction, we get the following generalization of Eq. (3.2):

$$H_{L}(\alpha_{1} \vee\alpha_{2} \vee\cdots \vee \alpha_{n}) = H_{L}(\alpha_{1}) + \sum _{i = 2}^{n} H_{L}(\alpha_{i}/ \alpha_{1} \vee\cdots \vee\alpha_{i - 1}) $$

for all partitions \(\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\) of Ω. If we put \(n = 3\), then we have

$$H_{L}(\alpha_{1} \vee\alpha_{2} \vee \alpha_{3}) = H_{L}(\alpha_{1}) + H_{L}(\alpha_{2}/\alpha_{1}) + H_{L}( \alpha_{3}/\alpha_{1} \vee \alpha_{2}) $$

for all partitions \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\) of Ω.
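Eq. (3.2) and its three-partition form can be verified numerically; a self-contained sketch (finite-Ω representation with point masses, names our own):

```python
def mu(E, w):
    return sum(w[x] for x in E)

def join(alpha, beta):
    return [A & B for A in alpha for B in beta if A & B]

def H_L(alpha, w):
    # logical entropy of a partition (Remark 3.1)
    return 1.0 - sum(mu(A, w) ** 2 for A in alpha)

def H_L_cond(alpha, beta, w):
    # conditional logical entropy H_L(alpha/beta) (Remark 3.4)
    return (sum(mu(B, w) ** 2 for B in beta)
            - sum(mu(A & B, w) ** 2 for A in alpha for B in beta))

w = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
a = [frozenset({0, 1}), frozenset({2, 3})]
b = [frozenset({0, 2}), frozenset({1, 3})]
c = [frozenset({0, 3}), frozenset({1, 2})]

# Eq. (3.2): H_L(a ∨ b) = H_L(a) + H_L(b/a)
print(abs(H_L(join(a, b), w) - (H_L(a, w) + H_L_cond(b, a, w))) < 1e-12)
# three-partition form of Remark 3.5
lhs = H_L(join(join(a, b), c), w)
rhs = H_L(a, w) + H_L_cond(b, a, w) + H_L_cond(c, join(a, b), w)
print(abs(lhs - rhs) < 1e-12)
```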

Theorem 3.3

Let α and β be two partitions of Ω. Then

  1. (i)

    \(H_{L} ( \alpha/\beta ) \le H_{L} ( \alpha )\);

  2. (ii)

    \(H_{L} ( \alpha\vee\beta ) \le H_{L} ( \alpha ) + H_{L} ( \beta )\).

Proof

Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\), \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\).

(i) For each \(i=1,2,\ldots,n\) we have

$$\begin{gathered} \sum_{j = 1}^{m} \mu(A_{i} \cap B_{j} ) \bigl(\mu(B_{j}) - \mu(A_{i} \cap B_{j})\bigr) \\ \quad\le\Biggl(\sum_{j = 1}^{m} \mu(A_{i} \cap B_{j} )\Biggr) \Biggl(\sum _{j = 1}^{m} \bigl(\mu (B_{j}) - \mu(A_{i} \cap B_{j})\bigr)\Biggr) \\ \quad= \mu(A_{i}) \Biggl(\sum _{j = 1}^{m} \bigl(\mu(B_{j}) - \mu(A_{i} \cap B_{j})\bigr)\Biggr) \\ \quad= \mu(A_{i}) \Biggl(1 - \sum _{j = 1}^{m} \mu(A_{i} \cap B_{j}) \Biggr) \\ \quad= \mu(A_{i}) \bigl(1 - \mu(A_{i})\bigr). \end{gathered} $$

Therefore we get

$$\begin{aligned} H_{L} ( \alpha/\beta )&= \sum _{i = 1}^{n} \sum_{j = 1}^{m} \mu (A_{i} \cap B_{j}) \bigl(\mu(B_{j}) - \mu(A_{i} \cap B_{j})\bigr) \\ &\le\sum_{i = 1}^{n} \mu(A_{i}) \bigl(1 - \mu(A_{i})\bigr) = H_{L} ( \alpha ). \end{aligned} $$

(ii) By Eq. (3.2) and the previous part of this theorem, we get

$$H_{L}(\alpha\vee\beta) = H_{L} ( \alpha ) + H_{L}(\beta /\alpha) \le H_{L} ( \alpha ) + H_{L}(\beta). $$

 □

Theorem 3.4

Let α, β and γ be partitions of Ω. Then

$$H_{L}(\alpha\vee\beta/\gamma) = H_{L} ( \alpha/\gamma ) + H_{L}(\beta/\alpha\vee\gamma). $$

Proof

Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\), \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\), \(\gamma= \{ C_{1},C_{2},\ldots,C_{r} \}\). Let us calculate

$$\begin{aligned} H_{L} ( \alpha/\gamma ) + H_{L}(\beta/\alpha\vee\gamma)={}& \sum_{k = 1}^{r} \bigl( \mu(C_{k}) \bigr)^{2} - \sum _{i = 1}^{n} \sum_{k = 1}^{r} \bigl( \mu(A_{i} \cap C_{k}) \bigr)^{2} \\ &+ \sum_{i = 1}^{n} \sum _{k = 1}^{r} \bigl( \mu(A_{i} \cap C_{k}) \bigr)^{2} - \sum_{j = 1}^{m} \sum_{i = 1}^{n} \sum _{k = 1}^{r} \bigl( \mu (B_{j} \cap A_{i} \cap C_{k}) \bigr)^{2} \\ ={}& \sum_{k = 1}^{r} \bigl( \mu(C_{k}) \bigr)^{2} - \sum_{i = 1}^{n} \sum_{j = 1}^{m} \sum _{k = 1}^{r} \bigl( \mu(A_{i} \cap B_{j} \cap C_{k}) \bigr)^{2}\\ ={}& H_{L}( \alpha\vee\beta/\gamma). \end{aligned} $$

 □

Using the principle of mathematical induction, we get the following generalization of Theorem 3.4.

Theorem 3.5

Let \(\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\), β be partitions of Ω. Then

$$H_{L}(\alpha_{1} \vee\alpha_{2} \vee\cdots \vee \alpha_{n}/\beta) = H_{L}(\alpha_{1}/\beta) + \sum _{i = 2}^{n} H_{L}( \alpha_{i}/\alpha_{1} \vee\cdots \vee\alpha_{i - 1} \vee \beta). $$

Proposition 3.2

Let α, β, γ be partitions of Ω. Then

  1. (i)

    \(\alpha\prec\beta\) implies \(\alpha\vee\gamma \prec\beta \vee\gamma\);

  2. (ii)

    \(\alpha\approx\beta\) implies \(\alpha\vee\gamma \approx \beta\vee\gamma\).

Proof

Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\), \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\), \(\gamma= \{ C_{1},C_{2},\ldots,C_{r} \}\).

(i) Let \(\alpha\prec\beta\), i.e., there exists a partition \(I_{1},I_{2},\ldots,I_{n}\) of the set \(\{1,2,\ldots,m\}\) such that \(A_{i} = \bigcup_{j \in I_{i}}B_{j}\). The partition \(\alpha\vee \gamma= \{ A_{i} \cap C_{k}; i = 1,2,\ldots,n, k = 1,2,\ldots,r \}\) is indexed by \(\{ (i,k); i = 1,2,\ldots,n, k = 1,2,\ldots,r \}\); hence we put \(I_{i, k} = \{ (j, k); j \in I_{i} \}\), for \(i = 1,2,\ldots,n\), \(k = 1,2,\ldots,r\). We get

$$A_{i} \cap C_{k}= \biggl( \bigcup _{j \in I_{i}}B_{j}\biggr) \cap C_{k} = \bigcup _{j \in I_{i}}(B_{j} \cap C_{k}) = \bigcup_{(j, t) \in I_{i, k}}(B_{j} \cap C_{t}), $$

for \(i = 1,2,\ldots,n\), \(k = 1,2,\ldots,r\). Therefore \(\alpha\vee\gamma\prec \beta\vee\gamma\).

(ii) Let \(\alpha\approx\beta\), i.e., \(\alpha\subset^{ \circ} \beta\) and \(\beta\subset^{ \circ} \alpha\). From the relation \(\alpha\subset^{ \circ} \beta\) it follows that for each \(A_{i} \in\alpha\) there exists \(B_{j} \in\beta\) such that \(\mu(A_{i} - B_{j}) = \mu(B_{j} - A_{i}) = 0\). Hence for each \(A_{i} \cap C_{k} \in\alpha\vee\gamma\) there exists \(B_{j} \cap C_{k} \in\beta\vee\gamma\) such that

$$\begin{aligned} \mu\bigl((A_{i} \cap C_{k}) - (B_{j} \cap C_{k})\bigr) &= \mu\bigl(A_{i} \cap C_{k} \cap (B_{j} \cap C_{k})^{C} \bigr) \\ &= \mu\bigl(A_{i} \cap C_{k} \cap \bigl(B_{j}^{C} \cup C_{k}^{C}\bigr) \bigr)\\ & = \mu\bigl(\bigl(A_{i} \cap C_{k} \cap B_{j}^{C}\bigr) \cup\bigl(A_{i} \cap C_{k} \cap C_{k}^{C}\bigr)\bigr) \\ &\le\mu\bigl(A_{i} \cap C_{k} \cap B_{j}^{C}\bigr) + \mu\bigl(A_{i} \cap C_{k} \cap C_{k}^{C}\bigr) \\ &\le\mu\bigl(A_{i} \cap B_{j}^{C} \bigr)\\ & = \mu(A_{i} - B_{j}) = 0, \end{aligned} $$

and

$$\begin{aligned} \mu\bigl((B_{j} \cap C_{k}) - (A_{i} \cap C_{k})\bigr) &= \mu\bigl(B_{j} \cap C_{k} \cap (A_{i} \cap C_{k})^{C} \bigr) \\ &= \mu\bigl(B_{j} \cap C_{k} \cap \bigl(A_{i}^{C} \cup C_{k}^{C}\bigr) \bigr)\\& = \mu\bigl(\bigl(B_{j} \cap C_{k} \cap A_{i}^{C}\bigr) \cup\bigl(B_{j} \cap C_{k} \cap C_{k}^{C}\bigr)\bigr) \\ &\le\mu\bigl(B_{j} \cap C_{k} \cap A_{i}^{C}\bigr) + \mu\bigl(B_{j} \cap C_{k} \cap C_{k}^{C}\bigr) \\ &\le\mu\bigl(B_{j} \cap A_{i}^{C} \bigr)\\& = \mu(B_{j} - A_{i}) = 0. \end{aligned} $$

Hence for each \(A_{i} \cap C_{k} \in\alpha\vee\gamma\) there exists \(B_{j} \cap C_{k} \in\beta\vee\gamma\) such that

$$\mu\bigl((A_{i} \cap C_{k}) - (B_{j} \cap C_{k})\bigr) = \mu\bigl((B_{j} \cap C_{k}) - (A_{i} \cap C_{k})\bigr) = 0, $$

i.e.,

$$\mu\bigl((A_{i} \cap C_{k})\Delta(B_{j} \cap C_{k})\bigr) = 0. $$

However, this indicates that \(\alpha\vee\gamma\subset^{ \circ} \beta \vee\gamma\). Analogously we see that the relation \(\beta\subset^{ \circ} \alpha\) implies \(\beta\vee\gamma\subset^{ \circ} \alpha\vee\gamma\). Thus \(\alpha\vee\gamma\approx\beta\vee\gamma\). □

Theorem 3.6

Let α, β and γ be partitions of Ω. Then

$$\alpha\prec\beta\quad\textit{implies}\quad H_{L} ( \alpha /\gamma ) \le H_{L} ( \beta/\gamma ). $$

Proof

Let \(\alpha\prec\beta\). Since by Proposition 3.2 we have \(\alpha\vee\gamma\prec\beta\vee\gamma\), according to Theorems 3.2 and 3.1 we get

$$H_{L} ( \alpha/\gamma ) = H_{L} ( \alpha\vee\gamma ) - H_{L} ( \gamma ) \le H_{L} ( \beta\vee\gamma ) - H_{L} ( \gamma ) = H_{L} ( \beta/\gamma ). $$

 □

Theorem 3.7

Let α, β and γ be partitions of Ω. Then

  1. (i)

    \(\alpha\subset^{ \circ} \beta\) if and only if \(H_{L} ( \alpha/\beta ) = 0\);

  2. (ii)

    \(\alpha\approx\beta\) implies \(H_{L} ( \alpha ) = H_{L} ( \beta )\);

  3. (iii)

    \(\alpha\approx\beta\) implies \(H_{L} ( \alpha /\gamma ) = H_{L} ( \beta/\gamma )\);

  4. (iv)

    \(\alpha\approx\beta\) implies \(H_{L} ( \gamma /\alpha ) = H_{L} ( \gamma/\beta )\).

Proof

Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\), \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\), \(\gamma= \{ C_{1},C_{2},\ldots,C_{r} \}\).

(i) The property is a direct consequence of the equivalence \(\alpha \subset^{ \circ} \beta\Leftrightarrow\mu(A_{i} \cap B_{j}) = \mu (B_{j})\) or \(\mu(A_{i} \cap B_{j}) = 0\), for \(i = 1,2,\ldots,n\) and \(j = 1,2,\ldots,m\), proved in [19] (Theorem 4.4).

(ii) By part (i), the assumption \(\alpha\approx\beta\) implies the equalities \(H_{L} ( \alpha/\beta ) = H_{L} ( \beta /\alpha ) = 0\). Therefore, using Eq. (3.2), we get

$$H_{L}(\alpha\vee\beta) - H_{L}(\beta) = H_{L}( \beta\vee\alpha) - H_{L}(\alpha), $$

and consequently

$$H_{L} ( \alpha ) = H_{L} ( \beta ). $$

(iii) Let \(\alpha\approx\beta\). Then, by Proposition 3.2, \(\alpha \vee\gamma\approx\beta\vee\gamma\), and so, by part (ii) of this theorem, \(H_{L} ( \alpha\vee\gamma ) = H_{L} ( \beta \vee\gamma )\). Hence by Eq. (3.2) we get

$$H_{L} ( \alpha/\gamma ) = H_{L} ( \alpha\vee\gamma ) - H_{L} ( \gamma ) = H_{L} ( \beta\vee\gamma ) - H_{L} ( \gamma ) = H_{L} ( \beta/\gamma ). $$

(iv) Let \(\alpha\approx\beta\). Then, by part (ii) of this theorem, \(H_{L} ( \alpha ) = H_{L} ( \beta )\). Moreover, by Proposition 3.2, \(\alpha\vee\gamma\approx\beta\vee\gamma\); hence, by part (ii) of this theorem, \(H_{L} ( \alpha\vee\gamma ) = H_{L} ( \beta\vee\gamma )\). Therefore, according to Theorem 3.2, we can write

$$H_{L} ( \gamma/\alpha ) = H_{L} ( \gamma\vee\alpha ) - H_{L} ( \alpha ) = H_{L} ( \beta\vee\gamma ) - H_{L} ( \beta ) = H_{L} ( \gamma/\beta ). $$

 □

Note that parts (ii), (iii), and (iv) of the previous theorem show that the logical entropy and the conditional logical entropy are invariant under the relation ≈.

In the following, some illustrative numerical examples are provided.

Example 3.1

Consider the probability space \((\Omega, S, \mu)\), where Ω is the unit interval \([ 0, 1 ]\), S is the σ-algebra of all Borel subsets of the unit interval \([ 0, 1 ]\), and the mapping \(\mu:S \to [ 0, 1 ]\) is the Lebesgue measure, i.e., \(\mu ( [ x,y ] ) = y - x\) for any \(x,y \in [ 0, 1 ]\), \(x < y\). Evidently, the collections \(\alpha= \{ [ 0,\frac{1}{3} ), [ \frac{1}{3},\frac{2}{3} ), [ \frac{2}{3},1 ] \}\), and \(\beta= \{ [ 0,\frac{1}{4} ), [ \frac{1}{4},\frac{1}{2} ), [ \frac{1}{2},\frac{4}{5} ), [ \frac{4}{5},1 ] \}\) are two measurable partitions of Ω. The join of partitions α, β is the collection \(\alpha\vee \beta= \{ [ 0,\frac{1}{4} ), [ \frac{1}{4},\frac{1}{3} ), [ \frac{1}{3},\frac{1}{2} ), [ \frac{1}{2},\frac{2}{3} ), [ \frac{2}{3},\frac{4}{5} ), [ \frac{4}{5},1 ] \}\). By simple calculations we get the logical entropy of these partitions:

$$\begin{gathered} H_{L}(\alpha) =1 - \sum _{i = 1}^{3} \bigl( \mu(A_{i}) \bigr)^{2} = 1 - \biggl( \biggl( \frac{1}{3} \biggr)^{2} + \biggl( \frac{1}{3} \biggr)^{2} + \biggl( \frac{1}{3} \biggr)^{2} \biggr) = \frac{2}{3}; \\ H_{L}(\beta) =1 - \sum_{i = 1}^{4} \bigl( \mu(B_{i}) \bigr)^{2} = 1 - \biggl( \biggl( \frac{1}{4} \biggr)^{2} + \biggl( \frac{1}{4} \biggr)^{2} + \biggl( \frac{3}{10} \biggr)^{2} + \biggl( \frac{2}{10} \biggr)^{2} \biggr) = \frac{149}{200}; \\ H_{L}(\alpha\vee\beta)= 1 - \biggl( \biggl( \frac{1}{4} \biggr)^{2} + \biggl( \frac{1}{12} \biggr)^{2} + \biggl( \frac{1}{6} \biggr)^{2} + \biggl( \frac{1}{6} \biggr)^{2} + \biggl( \frac{2}{15} \biggr)^{2} + \biggl( \frac{1}{5} \biggr)^{2} \biggr) = \frac{1471}{1800}. \end{gathered} $$

The conditional logical entropy of α given β is the number:

$$\begin{aligned} H_{L} ( \alpha/\beta )={}& \sum _{j = 1}^{4} \bigl( \mu(B_{j}) \bigr)^{2} - \sum_{i = 1}^{3} \sum _{j = 1}^{4} \bigl( \mu(A_{i} \cap B_{j}) \bigr)^{2} \\ ={}& \biggl( \frac{1}{4} \biggr)^{2} + \biggl( \frac{1}{4} \biggr)^{2} + \biggl( \frac{3}{10} \biggr)^{2} + \biggl( \frac{2}{10} \biggr)^{2} \\ & - \biggl( \biggl( \frac{1}{4} \biggr)^{2} + \biggl( \frac{1}{12} \biggr)^{2} + \biggl( \frac{1}{6} \biggr)^{2} + \biggl( \frac{1}{6} \biggr)^{2} + \biggl( \frac{2}{15} \biggr)^{2} + \biggl( \frac{1}{5} \biggr)^{2} \biggr) \\ ={}& \frac{13}{180}. \end{aligned} $$

Now it is possible to verify that the equality \(H_{L}(\alpha\vee\beta ) = H_{L} ( \beta ) + H_{L} ( \alpha/\beta )\) holds.
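All of these values can be reproduced exactly with rational arithmetic; a short sketch using Python's fractions module:

```python
from fractions import Fraction as F

def H_L(probs):
    # logical entropy from block probabilities (Remark 3.1)
    return 1 - sum(p * p for p in probs)

alpha = [F(1, 3)] * 3
beta = [F(1, 4), F(1, 4), F(3, 10), F(1, 5)]
joint = [F(1, 4), F(1, 12), F(1, 6), F(1, 6), F(2, 15), F(1, 5)]

print(H_L(alpha))            # 2/3
print(H_L(beta))             # 149/200
print(H_L(joint))            # 1471/1800
# H_L(alpha/beta) = H_L(alpha ∨ beta) - H_L(beta) = 13/180 (Theorem 3.2)
print(H_L(joint) - H_L(beta))
```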

Example 3.2

Consider the probability space \((\Omega, S, \mu)\) and the measurable partition α of Ω from the previous example. If we put \(\beta= \{ A_{i} \cap Q, A_{i} \cap Q^{C}, i = 1,2,\ldots,n \}\), where Q is the set of all rational numbers in the real line \(R^{1}\) and \(Q^{C}\) denotes the complement of Q, then β is a measurable partition of Ω. For \(i = 1,2,\ldots,n\), we have

$$\mu \bigl( A_{i} \cap\bigl(A_{i} \cap Q^{C} \bigr) \bigr) = \mu \bigl( A_{i} \cap Q^{C} \bigr), $$

and

$$\mu \bigl( A_{i} \cap(A_{i} \cap Q) \bigr) = \mu ( A_{i} \cap Q ). $$

For every \(i \ne j\), we get

$$\mu \bigl( A_{i} \cap\bigl(A_{j} \cap Q^{C} \bigr) \bigr) = \mu \bigl( A_{i} \cap A_{j} \cap Q^{C} \bigr) = \mu(\emptyset) = 0. $$

Therefore we conclude that, for every \(A \in\alpha\), \(B \in\beta\), \(\mu ( A \cap B ) = \mu ( B )\) or \(\mu ( A \cap B ) = 0\). But this means that \(\alpha\subset^{ \circ} \beta\). Hence we get

$$\begin{aligned} H_{L} ( \alpha/\beta )&= \sum _{j = 1}^{m} \bigl( \mu(B_{j}) \bigr)^{2} - \sum_{i = 1}^{n} \sum _{j = 1}^{m} \bigl( \mu(A_{i} \cap B_{j}) \bigr)^{2} \\ &= \sum_{j = 1}^{m} \bigl( \mu(B_{j}) \bigr)^{2} - \sum_{j = 1}^{m} \bigl( \mu(B_{j}) \bigr)^{2} = 0. \end{aligned} $$

In the same way we see that \(\beta\subset^{ \circ} \alpha\) and \(H_{L} ( \beta/\alpha ) = 0\). Since \(\alpha\subset^{ \circ} \beta\) and \(\beta\subset^{ \circ} \alpha\), we get \(\alpha\approx\beta\). Evidently, \(H_{L} ( \alpha ) = H_{L} ( \beta )\).

Theorem 3.8

If partitions \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) and \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\) of Ω are statistically independent, i.e., \(\mu(A_{i} \cap B_{j}) = \mu(A_{i}) \cdot\mu (B_{j})\), for \(i = 1,2,\ldots,n\) and \(j = 1,2,\ldots,m\), then

$$1 - H_{L} ( \alpha\vee\beta ) = \bigl(1 - H_{L}(\alpha) \bigr) \cdot \bigl(1 - H_{L}(\beta)\bigr). $$

Proof

Let us calculate

$$\begin{aligned} 1 - H_{L} ( \alpha\vee\beta ) &= \sum _{i = 1}^{n} \sum_{j = 1}^{m} \bigl( \mu(A_{i} \cap B_{j}) \bigr)^{2} \\ &= \sum_{i = 1}^{n} \sum _{j = 1}^{m} \bigl( \mu(A_{i}) \cdot \mu(B_{j}) \bigr)^{2}\\& = \sum_{i = 1}^{n} \bigl( \mu(A_{i}) \bigr)^{2} \sum _{j = 1}^{m} \bigl( \mu(B_{j}) \bigr)^{2} \\ &= \bigl(1 - H_{L}(\alpha)\bigr) \cdot\bigl(1 - H_{L}( \beta)\bigr). \end{aligned} $$

 □

Remark 3.6

By contrast, in the case of the Shannon entropy, additivity holds: for statistically independent partitions α, β of Ω

$$H_{S} ( \alpha\vee\beta ) = H_{S} ( \alpha ) + H_{S} ( \beta ). $$
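The contrast is easy to observe numerically: under a product measure, \(1 - H_{L}\) multiplies (Theorem 3.8) while \(H_{S}\) adds. A minimal sketch (the block probabilities are chosen by us for illustration):

```python
import math
from fractions import Fraction as F

def H_L(probs):
    return 1 - sum(p * p for p in probs)

def H_S(probs):
    # Shannon entropy with the natural logarithm
    return -sum(float(p) * math.log(p) for p in probs if p > 0)

p = [F(1, 2), F(1, 2)]                 # block probabilities of alpha
q = [F(1, 3), F(2, 3)]                 # block probabilities of beta
joint = [a * b for a in p for b in q]  # independence: product measure

# Theorem 3.8: 1 - H_L(joint) = (1 - H_L(p)) (1 - H_L(q))
print(1 - H_L(joint) == (1 - H_L(p)) * (1 - H_L(q)))  # True, exact
# Shannon additivity for independent partitions
print(abs(H_S(joint) - (H_S(p) + H_S(q))) < 1e-12)    # True
```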

4 Logical entropy of dynamical systems

In this section, using the concept of logical entropy of measurable partitions, we define the logical entropy of dynamical systems. Recall that a dynamical system in the sense of classical probability theory [19] is a quadruple \((\Omega,S,\mu,T)\), where \((\Omega, S, \mu)\) is a probability space and \(T: \Omega\to\Omega\) is a measure-preserving transformation (i.e., \(E \in S\) implies \(T^{ - 1}(E) \in S\) and \(\mu(T^{ - 1}(E)) = \mu (E)\)). If \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) is a partition of Ω, then \(T^{ - 1}(\alpha)\) denotes the partition \(\{ T^{ - 1}(A_{1}), T^{ - 1}(A_{2}),\ldots,T^{ - 1}(A_{n}) \}\). By Eq. (3.1) we can also compute the logical entropy of the partition \(\bigvee_{i = 0}^{n - 1}T^{ - i}(\alpha ) = \alpha\vee T^{ - 1}(\alpha) \vee\cdots \vee T^{ - (n - 1)}(\alpha)\).
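On a finite Ω these operations can be carried out explicitly. The following sketch is our own illustration: T is a cyclic permutation of a four-point set, which preserves the uniform measure, and we compute \(T^{-1}(\alpha)\) and the join \(\bigvee_{i = 0}^{n - 1}T^{ - i}(\alpha)\).

```python
def preimage_partition(alpha, T, omega):
    # T^{-1}(alpha): each block A contributes T^{-1}(A) = {x : T(x) in A}
    return [frozenset(x for x in omega if T(x) in A) for A in alpha]

def join(alpha, beta):
    # alpha ∨ beta; empty intersections are discarded
    return [A & B for A in alpha for B in beta if A & B]

def refinement(alpha, T, omega, n):
    # alpha ∨ T^{-1}(alpha) ∨ ... ∨ T^{-(n-1)}(alpha)
    result, current = alpha, alpha
    for _ in range(n - 1):
        current = preimage_partition(current, T, omega)
        result = join(result, current)
    return result

omega = range(4)
T = lambda x: (x + 1) % 4   # a permutation of omega preserves the uniform measure
alpha = [frozenset({0, 1}), frozenset({2, 3})]
print(refinement(alpha, T, omega, 2))  # the four singletons {0}, {1}, {3}, {2}
```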

We note that fuzzy generalizations of the notion of a dynamical system have been introduced and studied, e.g., in [20–26]. Notice that while in the papers [20–22, 26] the authors deal with the Shannon and Kolmogorov–Sinai entropy of fuzzy dynamical systems, in the paper [23] the logical entropy of fuzzy dynamical systems has been studied. We remark that some of the results of the article [23] are fuzzy generalizations of the results provided in Sects. 3 and 4 of the present paper.

Proposition 4.1

Let any dynamical system \((\Omega,S,\mu,T)\) be given. If α, β are partitions of Ω, then

  1. (i)

    \(T^{ - 1}(\alpha\vee\beta) = T^{ - 1}(\alpha) \vee T^{ - 1}(\beta )\);

  2. (ii)

    \(\alpha\prec\beta\) implies \(T^{ - 1}(\alpha) \prec T^{ - 1}(\beta)\).

Proof

Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\), \(\beta= \{ B_{1},B_{2},\ldots,B_{m} \}\). The property (i) follows from the equality \(T^{ - 1}(A_{i} \cap B_{j}) = T^{ - 1}(A_{i}) \cap T^{ - 1}(B_{j})\), \(i = 1,2,\ldots,n\), \(j = 1,2,\ldots,m\).

(ii) Assume that \(\alpha\prec\beta\). Then for each \(A_{i} \in\alpha\) there exists a subset \(I_{i} \subset \{ 1,2,\ldots,m \}\) such that \(A_{i} = \bigcup_{j \in I_{i}}B_{j}\), \(I_{i} \cap I_{j} = \emptyset\) for \(i \ne j\), and \(\bigcup_{i = 1}^{n}I_{i} = \{ 1,2,\ldots,m \}\). Therefore we can write

$$T^{ - 1}(A_{i}) = T^{ - 1}\biggl( \bigcup _{j \in I_{i}}B_{j}\biggr) = \bigcup _{j \in I_{i}}T^{ - 1}(B_{j}),\quad i = 1,2,\ldots,n. $$

However, this means that \(T^{ - 1}(\alpha) \prec T^{ - 1}(\beta)\). □

Theorem 4.1

Let any dynamical system \((\Omega,S,\mu,T)\) be given. If α, β are partitions of Ω, then the following properties are satisfied:

  1. (i)

    \(H_{L}(T^{ - r}(\alpha)) = H_{L}(\alpha)\), \(r = 0,1,2,\ldots \);

  2. (ii)

    \(H_{L}(T^{ - r}(\alpha)/T^{ - r}(\beta)) = H_{L}(\alpha /\beta )\), \(r = 0,1,2,\ldots \);

  3. (iii)

    \(H_{L}( \bigvee_{i = 0}^{n - 1}T^{ - i}(\alpha)) = H_{L}(\alpha) + \sum_{j = 1}^{n - 1} H_{L}(\alpha/ \bigvee_{i = 1}^{j}T^{ - i}(\alpha))\).

Proof

(i) Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\). Since \(\mu ( T^{ - r}(A_{i}) ) = \mu(A_{i})\), for \(i = 1,2,\ldots,n\), and \(r = 0,1,2,\ldots\) , we get

$$H_{L}\bigl(T^{ - r}(\alpha)\bigr) = 1 - \sum _{i = 1}^{n} \bigl( \mu\bigl(T^{ - r}(A_{i}) \bigr) \bigr)^{2} = 1 - \sum_{i = 1}^{n} \bigl( \mu(A_{i}) \bigr)^{2} = H_{L}(\alpha). $$

(ii) Assume that \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\), \(\beta = \{ B_{1},B_{2},\ldots,B_{m} \}\). Then

$$\begin{aligned} H_{L}\bigl(T^{ - r}( \alpha)/T^{ - r}(\beta)\bigr) &= \sum_{j = 1}^{m} \bigl( \mu \bigl( T^{ - r}(B_{j}) \bigr) \bigr)^{2} - \sum_{i = 1}^{n} \sum _{j = 1}^{m} \bigl( \mu \bigl( T^{ - r}(A_{i} \cap B_{j}) \bigr) \bigr)^{2} \\ &= \sum_{j = 1}^{m} \bigl( \mu(B_{j}) \bigr)^{2} - \sum_{i = 1}^{n} \sum_{j = 1}^{m} \bigl( \mu(A_{i} \cap B_{j}) \bigr)^{2}\\& = H_{L}(\alpha/\beta ). \end{aligned} $$

(iii) We proceed by mathematical induction. For \(n = 2\), the equality (iii) is a simple consequence of Eq. (3.2). We assume that the statement holds for a given \(n \in\mathrm{N}\) and prove that it holds for \(n + 1\). By property (i) we have

$$H_{L}\Biggl( \bigvee_{i = 1}^{n}T^{ - i}( \alpha)\Biggr) = H_{L}\Biggl(T^{ - 1}\Biggl( \bigvee _{i = 0}^{n - 1}T^{ - i}(\alpha)\Biggr)\Biggr) = H_{L}\Biggl( \bigvee_{i = 0}^{n - 1}T^{ - i}( \alpha)\Biggr). $$

Therefore using Eq. (3.2) and the induction assumption we get

$$\begin{aligned} H_{L}\Biggl( \bigvee _{i = 0}^{n}T^{ - i}(\alpha)\Biggr)& = H_{L}\Biggl(\Biggl( \bigvee_{i = 1}^{n}T^{ - i}( \alpha)\Biggr) \vee\alpha\Biggr)= H_{L}\Biggl( \bigvee _{i = 1}^{n}T^{ - i}(\alpha )\Biggr) + H_{L}\Biggl(\alpha/ \bigvee_{i = 1}^{n}T^{ - i}( \alpha)\Biggr) \\ &= H_{L}\Biggl( \bigvee_{i = 0}^{n - 1}T^{ - i}( \alpha)\Biggr) + H_{L}\Biggl(\alpha/ \bigvee _{i = 1}^{n}T^{ - i}(\alpha)\Biggr) \\ &= H_{L} ( \alpha ) + \sum_{j = 1}^{n - 1} H_{L}\Biggl(\alpha/ \bigvee_{i = 1}^{j}T^{ - i}( \alpha)\Biggr) + H_{L}\Biggl(\alpha/ \bigvee _{i = 1}^{n}T^{ - i}(\alpha )\Biggr) \\ &= H_{L} ( \alpha ) + \sum_{j = 1}^{n} H_{L}\Biggl(\alpha/ \bigvee_{i = 1}^{j}T^{ - i}( \alpha)\Biggr). \end{aligned} $$

The proof is complete. □

In the following, we define the logical entropy of a dynamical system \((\Omega,S,\mu,T)\). We begin by defining the logical entropy of a measure-preserving transformation T relative to a partition α; subsequently, we remove the dependence on α to obtain the logical entropy of the dynamical system \((\Omega,S,\mu,T)\). We first need the following standard analytic lemma.

Lemma 4.1

([19], Theorem 4.9)

Let \(\{ u_{n} \}_{n = 1}^{\infty} \) be a sequence of nonnegative real numbers such that \(u_{r + s} \le u_{r} + u_{s}\) for every \(r,s \in\mathrm{N}\). Then \(\lim_{n \to\infty} \frac{1}{n}u_{n}\) exists.
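Lemma 4.1 is the classical lemma on subadditive sequences (Fekete's lemma); in fact \(\lim_{n \to\infty} \frac{1}{n}u_{n} = \inf_{n} \frac{1}{n}u_{n}\). A quick numerical illustration with the subadditive sequence \(u_{n} = \sqrt{n}\):

```python
import math

# u_n = sqrt(n) satisfies u_{r+s} <= u_r + u_s for r, s >= 1,
# and u_n / n converges (here to 0 = inf u_n / n).
u = lambda n: math.sqrt(n)
for n in (1, 10, 100, 1000, 10000):
    print(n, u(n) / n)
```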

Theorem 4.2

Let \((\Omega,S,\mu,T)\) be a dynamical system and α be a measurable partition of Ω. Then \(\lim_{n \to\infty} \frac{1}{n}H_{L}( \bigvee_{i = 0}^{n - 1}T^{ - i}(\alpha))\) exists.

Proof

Put \(u_{n} = H_{L}( \bigvee_{i = 0}^{n - 1}T^{ - i}(\alpha))\). Evidently, \(\{ u_{n} \}_{n = 1}^{\infty} \) is a sequence of nonnegative real numbers. We prove that \(u_{r + s} \le u_{r} + u_{s}\) for every \(r,s \in\mathrm{N}\). By the subadditivity of the logical entropy (Theorem 3.3(ii)) and property (i) of the previous theorem, for every \(r,s \in\mathrm{N}\), we have

$$\begin{aligned} u_{r + s} &=H_{L}\Biggl( \bigvee _{i = 0}^{r + s - 1}T^{ - i}(\alpha)\Biggr)\le H_{L}\Biggl( \bigvee_{i = 0}^{r - 1}T^{ - i}( \alpha)\Biggr) + H_{L}\Biggl( \bigvee_{i = r}^{r + s - 1}T^{ - i}( \alpha)\Biggr) \\ &= u_{r} + H_{L}\Biggl( \bigvee _{i = 0}^{s - 1}T^{ - (r + i)}(\alpha)\Biggr) = u_{r} + H_{L}\Biggl(T^{ - r}\Biggl( \bigvee _{i = 0}^{s - 1}T^{ - i}(\alpha)\Biggr)\Biggr) \\ &= u_{r} + H_{L}\Biggl( \bigvee _{i = 0}^{s - 1}T^{ - i}(\alpha)\Biggr) = u_{r} + u_{s}. \end{aligned} $$

Hence by Lemma 4.1, \(\lim_{n \to\infty} \frac{1}{n}H_{L}( \bigvee_{i = 0}^{n - 1}T^{ - i}(\alpha))\) exists. □

Definition 4.1

Let \((\Omega,S,\mu,T)\) be a dynamical system and α be a measurable partition of Ω. The logical entropy of T with respect to α is defined by

$$h_{L}(T, \alpha) =\lim_{n \to\infty} \frac{1}{n}H_{L} \Biggl( \bigvee_{i = 0}^{n - 1}T^{ - i}( \alpha)\Biggr). $$

Remark 4.1

Evidently, \(h_{L}(T, \alpha) \ge0\).
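The limit in Definition 4.1 can be approximated numerically. The following sketch is our own illustration on a finite Ω with the uniform measure and a cyclic permutation T; since \(T^{4}\) is the identity, the values \(\frac{1}{n}H_{L}( \bigvee_{i = 0}^{n - 1}T^{ - i}(\alpha))\) tend to zero, in agreement with Corollary 4.1 below.

```python
def mu(E, omega):
    # uniform probability measure on the finite set omega
    return len(E) / len(omega)

def H_L(alpha, omega):
    # logical entropy of a partition (Remark 3.1)
    return 1.0 - sum(mu(A, omega) ** 2 for A in alpha)

def preimage(alpha, T, omega):
    # T^{-1}(alpha)
    return [frozenset(x for x in omega if T(x) in A) for A in alpha]

def join(alpha, beta):
    return [A & B for A in alpha for B in beta if A & B]

omega = frozenset(range(4))
T = lambda x: (x + 1) % 4        # cyclic permutation; T^4 = identity
alpha = [frozenset({0, 1}), frozenset({2, 3})]

current, ref = alpha, alpha      # ref = join of T^{-i}(alpha) for i < n
for n in range(1, 9):
    print(n, H_L(ref, omega) / n)  # (1/n) H_L(...) decreases towards 0
    current = preimage(current, T, omega)
    ref = join(ref, current)
```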

Theorem 4.3

Let \((\Omega,S,\mu,T)\) be a dynamical system and α and β be measurable partitions of Ω such that \(\alpha \prec\beta\). Then \(h_{L}(T, \alpha) \le h_{L}(T, \beta)\).

Proof

The assumption \(\alpha\prec\beta\) implies the relation \(\bigvee_{i = 0}^{n - 1}T^{ - i}(\alpha) \prec \bigvee_{i = 0}^{n - 1}T^{ - i}(\beta)\), for every \(n\in\mathrm{N}\). According to the property (i) of Theorem 3.1, we get

$$H_{L} \Biggl( \bigvee_{i = 0}^{n - 1}T^{ - i}( \alpha) \Biggr) \le H_{L} \Biggl( \bigvee_{i = 0}^{n - 1}T^{ - i}( \beta) \Biggr). $$

Consequently, dividing by n and letting \(n \to\infty\), we conclude that

$$h_{L}(T,\alpha)\leq h_{L}(T,\beta). $$

 □

Definition 4.2

The logical entropy of a dynamical system \((\Omega,S,\mu,T)\) is defined by

$$h_{L} ( T ) =\sup \bigl\{ h_{L}(T, \alpha) ;\alpha \text{ is a partition of } \Omega \bigr\} . $$

Example 4.1

The system \((\Omega,S,\mu,I)\), where \(I: \Omega\to \Omega\) is the identity map, is a trivial case of a dynamical system. The operation ⋁ is idempotent; therefore, for every partition α of Ω, we have

$$h_{L}(I, \alpha) =\lim_{n \to\infty} \frac{1}{n}H_{L} \Biggl( \bigvee_{i = 0}^{n - 1}I^{ - i}( \alpha)\Biggr) =\lim_{n \to\infty} \frac{1}{n}H_{L}( \alpha) = 0. $$

Thus the logical entropy of \((\Omega,S,\mu,I)\) is the number

$$h_{L} ( I ) =\sup \bigl\{ h_{L}(I, \alpha) ;\alpha \text{ is a partition of } \Omega \bigr\} = 0. $$

Theorem 4.4

Let \((\Omega,S,\mu,T)\) be a dynamical system. Then, for any integer \(k > 0\), \(h_{L} ( T^{k} ) = k \cdot h_{L} ( T )\).

Proof

For each partition α of Ω we have

$$\begin{aligned} h_{L}\Biggl(T^{k}, \bigvee _{i = 0}^{k - 1}T^{ - i}(\alpha)\Biggr) &= \lim_{n \to \infty} \frac{1}{n}H_{L}\Biggl( \bigvee _{j = 0}^{n - 1}\bigl(T^{k} \bigr)^{ - j}\Biggl( \bigvee_{i = 0}^{k - 1}T^{ - i}( \alpha)\Biggr)\Biggr) \\ &=\lim_{n \to\infty} \frac{1}{n}H_{L} \Biggl( \bigvee_{j = 0}^{n - 1} \bigvee _{i = 0}^{k - 1}T^{ - (kj + i)}(\alpha)\Biggr) =\lim _{n \to\infty} \frac{1}{n}H_{L}\Biggl( \bigvee _{i = 0}^{nk - 1}T^{ - i}(\alpha)\Biggr) \\ &=\lim_{n \to\infty} \frac{nk}{n} \frac{1}{nk}H_{L}\Biggl( \bigvee_{i = 0}^{nk - 1}T^{ - i}( \alpha)\Biggr) = k \cdot h_{L}(T, \alpha). \end{aligned} $$

Therefore we get

$$\begin{aligned} k \cdot h_{L} ( T ) &=k \cdot\sup \bigl\{ h_{L}(T, \alpha) ;\alpha \text{ is a partition of } \Omega \bigr\} \\ &= \sup \Biggl\{ h_{L}\Biggl(T^{k}, \bigvee _{i = 0}^{k - 1}T^{ - i}(\alpha)\Biggr) ;\alpha \text{ is a partition of } \Omega \Biggr\} \\ &\le\sup \bigl\{ h_{L}\bigl(T^{k}, \beta\bigr) ;\beta \text{ is a partition of } \Omega \bigr\} = h_{L} \bigl( T^{k} \bigr). \end{aligned} $$

On the other hand, \(\alpha\prec\bigvee_{i = 0}^{k - 1}T^{ - i}(\alpha )\), and therefore

$$h_{L}\bigl(T^{k}, \alpha\bigr) \le h_{L} \Biggl(T^{k}, \bigvee_{i = 0}^{k - 1}T^{ - i}( \alpha )\Biggr) = k \cdot h_{L}(T, \alpha). $$

It follows that

$$\begin{aligned} h_{L} \bigl( T^{k} \bigr)&= \sup \bigl\{ h_{L}\bigl(T^{k}, \alpha\bigr) ;\alpha \text{ is a partition of } \Omega \bigr\} \\ &\le k \cdot\sup \bigl\{ h_{L}(T, \alpha) ;\alpha \text{ is a partition of } \Omega \bigr\} = k \cdot h_{L} ( T ). \end{aligned} $$

 □

Corollary 4.1

Let \((\Omega,S,\mu,T)\) be a dynamical system and let there exist an integer \(k > 0\) such that \(T^{k}\) is the identity map. Then \(h_{L} ( T ) = 0\).

Proof

Let \(k > 0\) be an integer such that \(T^{k} = I\). Then we have

$$h_{L} ( T ) = \frac{1}{k}h_{L} \bigl( T^{k} \bigr) = \frac{1}{k}h_{L} ( I ) = 0. $$

 □

Theorem 4.5

Let \((\Omega,S,\mu,T)\) be a dynamical system and α be a measurable partition of Ω. Then, for any integer \(k > 0\),

$$h_{L}(T, \alpha) = h_{L}\Biggl(T, \bigvee _{i = 0}^{k}T^{ - i}(\alpha)\Biggr). $$

Proof

Let α be a measurable partition of Ω. Then, for any integer \(k > 0\), we have

$$\begin{aligned} h_{L} \Biggl( T, \bigvee _{i = 0}^{k}T^{ - i}(\alpha) \Biggr) &=\lim _{n \to \infty} \frac{1}{n}H_{L}\Biggl( \bigvee _{j = 0}^{n - 1}T^{ - j}\Biggl( \bigvee _{i = 0}^{k}T^{ - i}(\alpha)\Biggr)\Biggr) \\ &=\lim_{n \to\infty} \frac{k + n}{n} \cdot \frac{1}{k + n}H_{L}\Biggl( \bigvee_{t = 0}^{k + n - 1}T^{ - t}( \alpha)\Biggr) \\&=\lim_{n \to\infty} \frac{1}{k + n}H_{L}\Biggl( \bigvee _{t = 0}^{k + n - 1}T^{ - t}(\alpha)\Biggr) = h_{L} ( T,\alpha ).\end{aligned} $$

 □

In the following part, we prove that two metrically isomorphic dynamical systems have the same logical entropy. We first recall what it means for two dynamical systems to be metrically isomorphic.

Definition 4.3

We say that two dynamical systems \((\Omega_{1},S_{1},\mu_{1},T_{1})\), \((\Omega_{2},S_{2},\mu_{2},T_{2})\) are metrically isomorphic if there exist \(X_{1} \in S_{1}\) and \(X_{2} \in S_{2}\) such that

  1. (i)

    \(T_{1}X_{1} \subset X_{1}\), \(T_{2}X_{2} \subset X_{2}\);

  2. (ii)

    \(\mu_{1}(X_{1}) = 1\), \(\mu_{2}(X_{2}) = 1\),

and there exists a bijective map \(\psi:X_{1} \to X_{2}\) such that

  1. (iii)

    ψ, \(\psi^{ - 1}\) are measure preserving;

  2. (iv)

    \(\psi\circ T_{1} = T_{2} \circ\psi\).

Theorem 4.6

If dynamical systems \((\Omega_{1},S_{1},\mu_{1},T_{1})\), \((\Omega_{2},S_{2},\mu_{2},T_{2})\) are metrically isomorphic, then \(h_{L} ( T_{1} ) = h_{L} ( T_{2} )\).

Proof

Let \(X_{1} \subset\Omega_{1}\), \(X_{2} \subset\Omega_{2}\) and \(\psi:X_{1} \to X_{2}\) be as in the previous definition. If \(\alpha= \{ A_{1},A_{2},\ldots,A_{n} \}\) is a measurable partition of \(\Omega_{2}\), then (changing it on a set of measure zero if necessary) it is also a measurable partition of \(X_{2}\). The inverse image \(\psi^{ - 1}\alpha= \{ \psi^{ - 1}(A_{i}); A_{i} \in\alpha \}\) is a measurable partition of \(X_{1}\) and hence of \(\Omega_{1}\). Moreover,

$$\begin{aligned} H_{L}\bigl(\psi^{ - 1}\alpha\bigr) &=\sum_{i = 1}^{n} \mu_{1}\bigl( \psi^{ - 1}(A_{i})\bigr) \bigl(1 - \mu_{1}\bigl( \psi^{ - 1}(A_{i})\bigr)\bigr) \\ &= \sum_{i = 1}^{n} \mu_{2}(A_{i}) \bigl(1 - \mu_{2}(A_{i}) \bigr) = H_{L}(\alpha). \end{aligned} $$

Hence we can write

$$H_{L} \Biggl( \bigvee_{i = 0}^{n - 1}T_{1}^{ - i} \bigl(\psi^{ - 1}\alpha\bigr) \Biggr) = H_{L} \Biggl( \psi^{ - 1} \bigvee_{i = 0}^{n - 1}T_{2}^{ - i}( \alpha) \Biggr) = H_{L} \Biggl( \bigvee_{i = 0}^{n - 1}T_{2}^{ - i}( \alpha) \Biggr). $$

Therefore, dividing by n and letting \(n \to\infty\), we get

$$h_{L}(T_{2}, \alpha) = h_{L} \bigl(T_{1}, \psi^{ - 1}\alpha\bigr). $$

Thus

$$\bigl\{ h_{L}(T_{2}, \alpha); \alpha\text{ is a partition of } \Omega_{2} \bigr\} \subset\bigl\{ h_{L}(T_{1}, \beta); \beta\text{ is a partition of }\Omega_{1} \bigr\} , $$

and consequently

$$\begin{aligned} h_{L}(T_{2})&= \sup\bigl\{ h_{L}(T_{2}, \alpha); \alpha\text{ is a partition of } \Omega_{2} \bigr\} \\ &\le\sup\bigl\{ h_{L}(T_{1}, \beta); \beta\text{ is a partition of } \Omega_{1} \bigr\} = h_{L}(T_{1}). \end{aligned} $$

By symmetry, we also have \(h_{L} ( T_{1} ) \le h_{L} ( T_{2} )\). The proof is complete. □

Remark 4.2

From Theorem 4.6 it follows that if \(h_{L} ( T_{1} ) \ne h_{L} ( T_{2} )\), then the corresponding dynamical systems \((\Omega_{1},S_{1},\mu_{1},T_{1})\), \((\Omega_{2},S_{2},\mu_{2},T_{2})\) are metrically non-isomorphic. This means that the logical entropy distinguishes metrically non-isomorphic dynamical systems; so we have acquired an alternative tool for distinguishing non-isomorphic dynamical systems. This result is illustrated in the following example.

Example 4.2

Consider the probability space \((\Omega, S, \mu)\), where Ω is the unit interval \([ 0, 1 ]\), S is the σ-algebra of all Borel subsets of the unit interval \([ 0, 1 ]\), and \(\mu:S \to [ 0, 1 ]\) is the Lebesgue measure, i.e., \(\mu ( [ x,y ] ) = y - x\) for any \(x,y \in [ 0, 1 ]\), \(x < y\). Let \(c \in ( 0, 1 )\), and let the mapping \(T_{c}: [ 0, 1 ] \to [ 0, 1 ]\) be defined by the formula \(T_{c}(x) = x + c\) (mod 1). The logical entropy distinguishes the metrically non-isomorphic dynamical systems \((\Omega, S, \mu, T_{c})\) for different c. Namely, \(h_{L}(T_{c}) = 0\) if \(c = 1 / 2\), but \(h_{L}(T_{c}) > 0\) for \(c = \sqrt{2} - 1\).

The well-known Kolmogorov–Sinai theorem on generators [19] (see also [27, 28]) is the main tool for calculating the entropy of a dynamical system. We conclude our contribution by formulating this theorem for the case of the logical entropy.

Definition 4.4

A partition γ of Ω is said to be a generator of a dynamical system \((\Omega,S,\mu,T)\) if for any partition α of Ω there exists an integer \(k > 0\) such that \(\alpha\prec\bigvee_{i = 0}^{k}T^{ - i}(\gamma)\).

Theorem 4.7

Let γ be a generator of a dynamical system \((\Omega,S,\mu,T)\). Then

$$h_{L}(T) = h_{L}(T, \gamma). $$

Proof

Let γ be a generator of a dynamical system \((\Omega,S,\mu,T)\). Then for any partition α of Ω there exists an integer \(k > 0\) such that \(\alpha\prec\bigvee_{i = 0}^{k}T^{ - i}(\gamma)\).

Hence by Theorems 4.3 and 4.5 we get

$$h_{L}(T, \alpha) \le h_{L}\Biggl(T, \bigvee _{i = 0}^{k}T^{ - i}(\gamma)\Biggr) = h_{L}(T, \gamma). $$

Therefore

$$h_{L}(T) = \sup \bigl\{ h_{L}(T, \alpha) ; \alpha \text{ is a partition of } \Omega \bigr\} = h_{L}(T, \gamma). $$

 □

5 Conclusions

In this contribution we have extended the results of Ellerman presented in [1] to the case of dynamical systems. Our results are given in Sects. 3 and 4. In Sect. 3 we introduced the notions of logical entropy and logical conditional entropy of finite measurable partitions of a probability space, and we examined the basic properties of the proposed measures. We have also provided some numerical examples to illustrate the results. In Sect. 4, the results of the previous part were used to introduce the concept of logical entropy of a dynamical system. It has been shown that two metrically isomorphic dynamical systems have the same logical entropy. Since the logical entropy distinguishes metrically non-isomorphic dynamical systems, we have acquired an alternative tool for distinguishing non-isomorphic dynamical systems. This result is illustrated by Example 4.2. Finally, we have proved a logical version of the Kolmogorov–Sinai theorem on generators (Theorem 4.7). In this study, it has been shown that replacing the Shannon entropy function by the logical entropy function yields results analogous to those of the classical Kolmogorov–Sinai entropy theory.