1 Introduction

The concept of entropy for a random variable was introduced by Shannon [29] to characterize the irreducible complexity of a particular sort of randomness. Nowadays this concept has applications in a wide range of fields including physics (statistical mechanics), electrical engineering (communication theory), computer science (algorithmic complexity) and portfolio theory. We refer to the book [11] for a comprehensive treatment of these topics. The notion of entropy is closely related to the theory of quantum information developed in [25]; see also [27] for recent results on this topic. Signal processing and network traffic analysis are also important areas for practical applications of entropy. In particular, it is used in some DDoS attack detection algorithms [17]. Entropy measurements are also applied in medical and biological studies, since pathologies and aging can be distinguished by measuring physiological complexity [15]. For example, these ideas have been applied to distinguish Alzheimer states [24] and to classify signals from Parkinson’s disease patients [33].

This paper focuses on the entropy of fractional Gaussian noise, which has become a suitable model for many natural phenomena due to its stationarity, self-similarity and presence of memory. In particular, this type of noise arises from the random motion of particles in diffusion processes [28]. Moreover, fractional Gaussian noise has been used to study meteorological data [32], traffic management analyses [13], and electrical measurements [16].

By definition, for a random variable \(\xi \) with probability density function \(p_\xi (x)\), the entropy (sometimes called the differential entropy, see, e.g., [19]) is given by the formula

$$\begin{aligned} \textbf{H}(\xi ) = - {\textsf{E}}\log p_\xi (\xi ) = - \int _{{\mathbb {R}}} p_\xi (x) \log p_\xi (x)\,dx. \end{aligned}$$

The entropy of a Gaussian vector was studied in detail in the book [31]. It is not difficult, therefore, to write formulas for the entropy of a stationary Gaussian process with discrete time. A particular but rather important and interesting case of a stationary Gaussian process with discrete time is fractional Gaussian noise with Hurst index \(H\in (0,1)\). On the one hand, it is not hard to produce the formula for the entropy of fractional Gaussian noise from formulas (5.4.5)–(5.4.6) in [31]. In the present paper we provide the corresponding expression for the entropy of this process, see (2.5)–(2.6).

On the other hand, note that the behavior of fractional Gaussian noise substantially depends on its Hurst parameter H. In particular, it has the long memory property for \(H\in (1/2, 1)\), while in the case \(H\in (0, 1/2)\) it is a process with short memory; see, e.g., the book [20] and the papers [1,2,3, 10, 26]. Of course, these properties are closely connected to the properties of the corresponding fractional operators: the fractional integrals and derivatives that convert the Wiener process into fractional Brownian motion. The properties of these operators are the subject of thousands of books and papers; let us mention only the recent general paper [18] and the references therein. In our paper the properties of the fractional order operators are reflected indirectly, in the sense that we use the Mandelbrot–van Ness representation of fractional Brownian motion, whose kernel contains fractional integrals of the indicator function.

It is natural to ask how the entropy (as a measure of the chaos of a dynamical system described by fractional Gaussian noise) depends on the Hurst index of the noise, because, as mentioned above, the behavior of the noise itself essentially depends on this index. However, the behavior of the entropy of fractional Gaussian noise as a function of \(H\in (0,1)\) had not been resolved; apparently, the question had not even been raised. The reason seems to be that the formula for the entropy of a Gaussian vector contains the determinant of the covariance matrix, and the behavior of this determinant in high dimensions is rather difficult to study analytically, whatever method is used, be it the Cholesky decomposition or an eigenvalue expansion. By studying the behavior of the entropy numerically, we observed that the entropy of fractional Gaussian noise increases as H increases from 0 to 1/2 and decreases as H increases from 1/2 to 1. This is quite natural, since \(H=1/2\) corresponds to a sequence of independent random variables, whose entropy is the greatest. This is our main hypothesis; we confirm it analytically for small n and numerically for large n.

Since the seminal paper by Shannon [29], several different definitions of entropy measures have been proposed in the literature. Furthermore, many authors have studied fractional Brownian motion and fractional Gaussian noise using various entropy concepts such as permutation entropy [4, 36], Tsallis entropy [35], multi-scale entropy [12, 23] and wavelet entropy [34]. Note that all these definitions are related to the covariance matrix of fractional Gaussian noise, whose theoretical study leads to many analytical and computational problems [21, 22]. In particular, it is impossible (or at least rather difficult) to investigate the properties of the entropy as a function of H for large values of n. Therefore, in this paper we introduce two new alternative entropy functionals which depend on the elements of the covariance matrix, mimic the behavior of the true entropy with respect to H and, at the same time, are quite amenable to analytical study.

The paper is organized as follows. Section 2 is devoted to the behavior of the entropy of fractional Gaussian noise as a function of the Hurst parameter H for fixed n. We start with the definition of the entropy and exact formulas for it in the case of fractional Gaussian noise. We present the entropy as a surface over H and n, which clearly shows the behavior of the determinant itself, of its logarithm and, as a consequence, of the entropy as functions of H and n. Then we study in detail two particular cases, namely \(n=2\) and \(n=3\), which analytically support the hypothesis that the entropy of fractional Gaussian noise increases as H increases from 0 to 1/2 and decreases as H increases from 1/2 to 1. In Sect. 3 we are interested in the behavior of the entropy as \(n\rightarrow \infty \). We derive lower bounds for the entropy and for its limiting value, known as the entropy rate. Moreover, we give an exact formula for the entropy rate via the spectral density. In Sect. 4 we introduce the alternative entropy functionals and study their monotonicity and asymptotic behaviour. Auxiliary results concerning stationary Gaussian processes and their entropy are collected in Appendix A.

2 Entropy of fractional Gaussian noise as a function of H

2.1 Entropy of Gaussian vector

Recall that the entropy of an absolutely continuous random variable \(\xi \) with probability density function \(p_\xi (x)\) is defined by

$$\begin{aligned} \textbf{H}(\xi ) = - {\textsf{E}}\log p_\xi (\xi ) = - \int _{{\mathbb {R}}} p_\xi (x) \log p_\xi (x)\,dx, \end{aligned}$$

see [31, Eq. (1.6.2)]. Similarly, one can define the entropy of an n-dimensional absolutely continuous random vector, using the joint density of its components. In particular, if an n-dimensional random vector \(\xi \) has a multivariate Gaussian distribution \({\mathcal {N}}(\mu _n,\varSigma _n)\) with mean \(\mu _n\) and covariance matrix \(\varSigma _n\), then the logarithm of its density equals

$$\begin{aligned} \log p_\xi (x) = - \frac{1}{2} (x-\mu _n)^\top \varSigma _n^{-1}(x-\mu _n) - \frac{n}{2}\log (2\pi ) -\frac{1}{2}\log (\det \varSigma _n), \quad x\in {\mathbb {R}}^n. \end{aligned}$$

Hence, the entropy of \(\xi \sim {\mathcal {N}}(\mu _n,\varSigma _n)\) is given by

$$\begin{aligned} \textbf{H}(\xi )=\frac{n}{2}\left( 1+\log (2\pi )\right) +\frac{1}{2}\log (\det \varSigma _n). \end{aligned}$$
(2.1)

This is a well-known formula, see [11, Theorem 8.4.1] or [31, Eq. (5.4.6)].
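As a quick sanity check of (2.1), the formula can be evaluated directly; the following minimal Python sketch (the helper name gaussian_entropy is ours, introduced for illustration) uses NumPy's sign-and-log-determinant routine for numerical stability:

```python
import numpy as np

def gaussian_entropy(Sigma):
    """Differential entropy (2.1) of N(mu, Sigma) in nats; the mean does not enter."""
    n = Sigma.shape[0]
    sign, logdet = np.linalg.slogdet(Sigma)
    assert sign > 0, "covariance matrix must be positive definite"
    return 0.5 * n * (1 + np.log(2 * np.pi)) + 0.5 * logdet

# Example: for the standard bivariate Gaussian the entropy equals 1 + log(2*pi).
print(gaussian_entropy(np.eye(2)), 1 + np.log(2 * np.pi))
```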

Remark 2.1

1. We use the natural logarithm \(\log =\log _e\) in the definition of the entropy. Note that in information theory (see, e.g., [11]) the entropy is sometimes defined using \(\log _2\) instead of \(\log \) (this is motivated by measurements in bits). In this case formula (2.1) is written as follows:

$$\begin{aligned} \overline{\textbf{H}}(\xi )=\frac{1}{2}\log _2\bigl ((2\pi e)^n \det \varSigma _n\bigr ). \end{aligned}$$

2. For Gaussian vectors, Stratonovich [31] introduced an alternative definition of the entropy, namely the entropy with respect to the measure \(\nu (d\xi _1,\dots ,d\xi _n) = (2\pi e)^{-n/2}d\xi _1\dots d\xi _n\). This approach leads to the following simplified version of (2.1):

$$\begin{aligned} \mathbf {\widetilde{H}}(\xi )=\frac{1}{2}\log (\det \varSigma _n). \end{aligned}$$
(2.2)

Remark 2.2

As we shall see below, the behavior of both versions of the entropy, \(\textbf{H}(\xi )\) and \(\mathbf {\widetilde{H}}(\xi )\), as functions of the Hurst index is the same and coincides with the behavior of \(\det \varSigma _n\): all of them increase when H increases from 0 to 1/2 and decrease when H increases from 1/2 to 1. Their behavior in n is different: \(\det \varSigma _n\), and consequently \(\mathbf {\widetilde{H}}(\xi )\), decreases in n for any fixed H, whereas \(\textbf{H}(\xi )\) increases in n, due to the linear term \(\frac{n}{2}\left( 1+\log (2\pi )\right) \).

2.2 Fractional Gaussian noise

Consider fractional Gaussian noise starting from zero. Let \(B^H = \left\{ B^H_t, t\ge 0\right\} \) be a fractional Brownian motion (fBm) with Hurst index \(H\in (0,1)\), i.e., a centered Gaussian process with covariance function of the form

$$\begin{aligned} {\textsf{E}}B^H_t B^H_s = \frac{1}{2}\left( t^{2H} + s^{2H} - \left|t-s\right|^{2H}\right) . \end{aligned}$$
(2.3)

Let us consider the following discrete-time process:

$$\begin{aligned}G^H_k = B^H_k - B^H_{k-1}, \quad k=1,2,3,\dots \end{aligned}$$

It is well known that the process \(B^H\) has stationary increments, which implies that \(\left\{ G^H_k,k\ge 1\right\} \) is a stationary Gaussian sequence (known as fractional Gaussian noise). It follows from (2.3) that its autocovariance function is given by

$$\begin{aligned} \rho _0(H) = 1, \quad \rho _k(H) = {\textsf{E}}G^H_1 G^H_{k+1} =\frac{1}{2} \left( (k+1)^{2H} - 2k^{2H} + (k-1)^{2H}\right) , \quad k\ge 1. \end{aligned}$$
(2.4)

Therefore, according to (2.1), the entropy of \((G^H_1, \dots , G^H_n)\) equals

$$\begin{aligned} \textbf{H}(G^H_1, \dots , G^H_n)=\frac{n}{2}\left( 1+\log (2\pi )\right) +\frac{1}{2}\log (\det \varSigma _n(H)), \end{aligned}$$
(2.5)

where

$$\begin{aligned} \varSigma _n(H) = {{\,\textrm{cov}\,}}(G^H_1, \dots , G^H_n) = \begin{pmatrix} 1 & \rho _1(H) & \rho _2(H) & \dots & \rho _{n-1}(H) \\ \rho _1(H) & 1 & \rho _1(H) & \dots & \rho _{n-2}(H) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho _{n-1}(H) & \rho _{n-2}(H) & \rho _{n-3}(H) & \dots & 1 \end{pmatrix}. \end{aligned}$$
(2.6)

Formula (2.2) is transformed to

$$\begin{aligned} \widetilde{\textbf{H}}(G^H_1, \dots , G^H_n)=\frac{1}{2}\log (\det \varSigma _n(H)). \end{aligned}$$
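Since (2.6) is a symmetric Toeplitz matrix generated by the autocovariances (2.4), both entropies are easy to compute numerically. A minimal sketch (the helper names fgn_cov and fgn_entropies are ours, introduced for illustration):

```python
import numpy as np
from scipy.linalg import toeplitz

def fgn_cov(n, H):
    """Covariance matrix Sigma_n(H) of fractional Gaussian noise, see (2.4) and (2.6)."""
    k = np.arange(1, n, dtype=float)
    rho = np.r_[1.0, 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))]
    return toeplitz(rho)

def fgn_entropies(n, H):
    """Entropy (2.5) and its Stratonovich version (2.2) for (G_1, ..., G_n)."""
    _, logdet = np.linalg.slogdet(fgn_cov(n, H))
    return 0.5 * n * (1 + np.log(2 * np.pi)) + 0.5 * logdet, 0.5 * logdet

print(fgn_entropies(50, 0.7))
```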

Remark 2.3

The definition of a fractional Gaussian noise \(G^H\) (as well as the definition of a fractional Brownian motion) can be extended to the cases \(H=0\) and \(H=1\). The corresponding processes \(G^0\) and \(G^1\) are defined as a limit of \(G^H\) in the sense of convergence of finite-dimensional distributions as \(H\downarrow 0\) and \(H\uparrow 1\) respectively.

Let \(H=0\). Note that the elements of the covariance matrix \(\varSigma _n(H)\) have finite limits as \(H\downarrow 0\), namely

$$\begin{aligned} \rho _1 (H) = \frac{1}{2} \left( 2^{2H} - 2\right) \rightarrow -\frac{1}{2} =:\rho _1(0) \;\text { and }\; \rho _k(H) \rightarrow 0, \; k\ge 2. \end{aligned}$$

This implies that finite-dimensional distributions of \(G^H\) converge to those of a Gaussian process with the following covariance matrix

$$\begin{aligned} \varSigma _n(0) = \begin{pmatrix} 1 &{} -\frac{1}{2} &{} \cdots &{} 0 &{} 0 \\ -\frac{1}{2} &{} 1 &{} \cdots &{} 0 &{} 0\\ \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} 1 &{} -\frac{1}{2} \\ 0 &{} 0 &{} \cdots &{} -\frac{1}{2} &{} 1 \end{pmatrix}. \end{aligned}$$
(2.7)

This Gaussian process can be viewed as fractional Gaussian noise with \(H=0\).

It is worth mentioning that the process \(G^0\) can be constructed explicitly as \(G^0_k = B^0_k - B^0_{k-1}\), \(k=1,2,3,\dots \), using a white noise of the form \(B^0_t = \frac{\xi _t-\xi _0}{\sqrt{2}}\), where \(\left\{ \xi _t, t\ge 0\right\} \) are independent \({\mathcal {N}}(0,1)\) random variables [8]. Indeed, in this case

$$\begin{aligned} \rho _0(0) = 1, \quad \rho _1(0) = \frac{1}{2} {\textsf{E}}(\xi _1-\xi _0)(\xi _2-\xi _1) = -\frac{1}{2} \quad \text {and}\quad \rho _k(0)=0,\; k\ge 2. \end{aligned}$$

Let \(H=1\). As \(H\uparrow 1\),

$$\begin{aligned} \rho _k(H)&= \frac{1}{2} \left( (k+1)^{2H} + (k-1)^{2H} - 2 k^{2H} \right) \\&\rightarrow \frac{1}{2} \left( (k+1)^{2} + (k-1)^{2} - 2 k^{2} \right) = 1. \end{aligned}$$

Therefore, \(G^1\) is a Gaussian process with the singular covariance matrix \(\varSigma _n(1)\) consisting of ones. Note that this process can also be constructed as \(G_k^1 = \xi \sim {\mathcal {N}}(0,1)\), \(k\ge 1\). It corresponds to the following definition of fractional Brownian motion with \(H=1\): \(B^1_t = \xi t\).

Remark 2.4

Let us mention several particular cases, when the determinant \(\det \varSigma _n(H)\) can be calculated explicitly.

Let \(H=\frac{1}{2}\). Then all \(\rho _k(\frac{1}{2})=0\), \(k\ge 1\), and \(\rho _0(\frac{1}{2})=1\). Therefore, \(\det \varSigma _n(\frac{1}{2})=1\), \(n\ge 1\), and consequently \(\log (\det \varSigma _n(\frac{1}{2}))=0.\)

Let \(H=1\). Then \(\rho _k(1)=1\), \(k\ge 0\). This means that for any \(n\ge 2\), \(\det \varSigma _n(1) = 0\), and consequently \(\log (\det \varSigma _n(1))=-\infty .\)

Let \(H=0\). Then the covariance matrix \(\varSigma _n (0)\) is tridiagonal, see (2.7); its determinant is calculated by the formula

$$\begin{aligned} \det \varSigma _n (0)&= \det \varSigma _{n-1}(0) - \frac{1}{4} \det \varSigma _{n-2}(0) = \dots \\&= \frac{k+1}{2^k} \det \varSigma _{n-k}(0) - \frac{k}{2^{k+1}} \det \varSigma _{n-k-1}(0), \end{aligned}$$

where \(\det \varSigma _0(0) = 1\), \(\det \varSigma _{-1}(0) = 0\). Therefore

$$\begin{aligned}\det \varSigma _n(0) = \frac{n+1}{2^n}, \quad n\ge 1,\end{aligned}$$

and consequently \(\log (\det \varSigma _n(0))=\log (n+1)-n\log 2.\) Obviously, both \(\det \varSigma _n(0)\) and \(\log (\det \varSigma _n(0))\) decrease in n and tend to zero and \(-\infty \), respectively.
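This closed-form expression is easily confirmed numerically; a small sketch comparing the determinant of the tridiagonal matrix (2.7) with \((n+1)/2^n\):

```python
import numpy as np

# Numerical check of det Sigma_n(0) = (n + 1) / 2^n for the tridiagonal matrix (2.7).
for n in [2, 5, 10, 20]:
    off = -0.5 * np.ones(n - 1)
    Sigma0 = np.eye(n) + np.diag(off, 1) + np.diag(off, -1)
    print(n, np.linalg.det(Sigma0), (n + 1) / 2**n)
```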

Fig. 1: \(\det \varSigma _n(H)\) as a function of H and n

Fig. 2: \(\log \det \varSigma _n(H)\) as a function of H and n

Fig. 3: \(\textbf{H}(G^H_1, \dots , G^H_n)\) as a function of H and n

It is quite difficult to prove the monotonicity properties of \(\det \varSigma _n(H)\) and its logarithm analytically in the general case. Therefore our main conjecture

(A):

\( \det \varSigma _n(H) \) and \(\log (\det \varSigma _n(H))\) increase from \(\frac{n+1}{2^n }\) to 1 and from \(\log (n+1)-n\log 2 \) to 0, respectively, when H increases from 0 to \(\frac{1}{2}\); they decrease from 1 to 0 and from 0 to \(-\infty \), respectively, when H increases from \(\frac{1}{2}\) to 1; moreover, both decrease in n for any fixed H

is checked numerically in the general case.

The surface of \( \det \varSigma _n(H)\) as a function of H and n is presented in Fig. 1. We observe that for any fixed \(n\ge 2\), \(\det \varSigma _n(H)\) increases in \(H\in (0,\frac{1}{2})\) and decreases in \(H\in (\frac{1}{2},1)\). Also, it decreases in n for any \(H\in (0,1)\). Figures 2 and 3 present the entropies \(2\mathbf {\widetilde{H}}(G^H_1, \dots , G^H_n)\) and \(\textbf{H}(G^H_1, \dots , G^H_n)\), respectively. We arrange these entropy surfaces in this order because \(\det \varSigma _n(H)\) and \(\mathbf {\widetilde{H}}(G^H_1, \dots , G^H_n) = \frac{1}{2} \log \det \varSigma _n(H)\) have similar behavior with respect to both arguments, H and n. In turn, the entropy \(\textbf{H}(G^H_1, \dots , G^H_n)\) demonstrates different behavior with respect to n due to the additional term in the equality \(\textbf{H}(\xi )={\widetilde{\textbf{H}}}(\xi ) + \frac{n}{2}\left( 1+\log (2\pi )\right) \).
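A numerical check of conjecture (A) can be organized as follows; this is a sketch (the grid of H values and the choice of n are ours), with fgn_cov repeating the helper from the sketch above:

```python
import numpy as np
from scipy.linalg import toeplitz

def fgn_cov(n, H):  # as in the sketch of Sect. 2.2
    k = np.arange(1, n, dtype=float)
    return toeplitz(np.r_[1.0, 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))])

Hs = np.linspace(0.02, 0.98, 49)
for n in [2, 3, 5, 10, 25]:
    d = np.array([np.linalg.det(fgn_cov(n, H)) for H in Hs])
    print(n,
          bool(np.all(np.diff(d[Hs < 0.5]) > 0)),   # increases on (0, 1/2)
          bool(np.all(np.diff(d[Hs > 0.5]) < 0)))   # decreases on (1/2, 1)
```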

However, below we study in more detail the two particular cases \(n=2\) and \(n=3\) and prove that the corresponding determinants increase when H increases from 0 to \(\frac{1}{2}\) and decrease when H increases from \(\frac{1}{2}\) to 1. As we shall see, even in the case \(n=3\) the proof of monotonicity requires a lot of technical work.

2.3 Cases \(n=2\) and \(n=3\)

Consider the determinants for \(n=2\) and \(n=3\), focusing on their monotonicity in H.

Lemma 2.5

(Case \(n=2\)) The determinant \(\det \varSigma _2(H)\) increases from \(\frac{3}{4}\) to 1 when H increases from 0 to \(\frac{1}{2}\) and decreases from 1 to 0 when H increases from \(\frac{1}{2}\) to 1. Consequently, \(\log (\det \varSigma _2(H))\) increases from \(\log {3}-2\log 2\) to 0 when H increases from 0 to \(\frac{1}{2}\) and decreases from 0 to \(-\infty \) when H increases from \(\frac{1}{2}\) to 1.

Proof

For \(n=2\), we have

$$\begin{aligned} \det \varSigma _2(H) = \det \begin{pmatrix} 1 & \rho _1(H) \\ \rho _1(H) & 1 \end{pmatrix} = 1 - \rho _1^2(H), \end{aligned}$$
(2.8)

where

$$\begin{aligned} \rho _1(H) = \frac{1}{2}\left( 2^{2H} - 2\right) = 2^{2H-1} - 1. \end{aligned}$$

So,

$$\begin{aligned} \det \varSigma _2(H) = 1 - \left( 2^{2H-1} - 1\right) ^2 = 1 - 2^{4H-2} + 2^{2H} - 1 = - 2^{4H-2} + 2^{2H}. \end{aligned}$$

Consider the function

$$\begin{aligned} \varphi _2(H) = - 2^{4H-2} + 2^{2H}, \quad H\in (0,1). \end{aligned}$$

Its derivative equals

$$\begin{aligned} \varphi _2'(H) = - 4 \cdot 2^{4H-2} \log 2 + 2 \cdot 2^{2H} \log 2 = 2^{2H+1} \log 2 \left( 1 - 2^{2H-1}\right) , \end{aligned}$$

and \(\varphi _2'(H) > 0\) for \(H\in (0,\frac{1}{2})\), \(\varphi _2'(H) < 0\) for \(H\in (\frac{1}{2},1)\). \(\square \)

Lemma 2.6

(Case \(n=3\)) The determinant \(\det \varSigma _3(H)\) increases from \(\frac{1}{2}\) to 1 when H increases from 0 to \(\frac{1}{2}\) and decreases from 1 to 0 when H increases from \(\frac{1}{2}\) to 1. Consequently, \(\log (\det \varSigma _3(H))\) increases from \(-\log 2\) to 0 when H increases from 0 to \(\frac{1}{2}\) and decreases from 0 to \(-\infty \) when H increases from \(\frac{1}{2}\) to 1.

Proof

The value of the determinant equals

$$\begin{aligned} \det \varSigma _3(H) = \det \begin{pmatrix} 1 & \rho _1(H) & \rho _2(H) \\ \rho _1(H) & 1 & \rho _1(H) \\ \rho _2(H) & \rho _1(H) & 1 \end{pmatrix} = 1 + 2\rho _1^2(H)\rho _2(H) - \rho _2^2(H) - 2\rho _1^2(H), \end{aligned}$$
(2.9)

where

$$\begin{aligned} \rho _2(H) = \frac{1}{2}\left( 3^{2H} - 2^{2H+1} + 1\right) . \end{aligned}$$

Consider the function

$$\begin{aligned} \varphi _3(H) = 1 + 2 x^2 y - y^2 - 2x^2, \;\text { where } x = \rho _1(H),\; y = \rho _2(H), \end{aligned}$$

and calculate its derivative in H:

$$\begin{aligned} \varphi _3'(H)&= 4x x'_H y - 2y y'_H - 4 x x'_H + 2 x^2 y'_H\\&= 2\bigl [x(2yx'_H + x y'_H) - (y y'_H +2 x x'_H)\bigr ], \end{aligned}$$

where \(x'_H = \frac{d}{dH}\rho _1(H)\), \(y'_H = \frac{d}{dH}\rho _2(H).\)

First, let \(H\in (\frac{1}{2},1]\). Then

$$\begin{aligned} x'_H = 2^{2H}\log 2 > 0, \quad y'_H = 3^{2H}\log 3 - 2\cdot 2^{2H}\log 2. \end{aligned}$$

Let us prove that \(y'_H>0\). Indeed,

$$\begin{aligned} y'_H = 2^{2H+1}\log 2 \left( \left( \frac{3}{2}\right) ^{2H}\frac{\log 3}{\log 4} - 1\right) . \end{aligned}$$

If \(H=\frac{1}{2}\), then

$$\begin{aligned} \left( \frac{3}{2}\right) ^{2H}\frac{\log 3}{\log 4} - 1 =\frac{\log 27}{\log 16} - 1 > 0. \end{aligned}$$

Since \(y'_H\) evidently increases in H, it is strictly positive. Note that \(x\le 1\). Therefore for \(H\in (\frac{1}{2},1]\)

$$\begin{aligned} \varphi _3'(H) < 2\bigl (2yx'_H + x y'_H - y y'_H - 2 x x'_H\bigr ) = 2(x-y)(y'_H - 2 x'_H). \end{aligned}$$

Further,

$$\begin{aligned} x-y = 2^{2H-1} - 1 - \frac{1}{2}\cdot 3^{2H} + 2^{2H} - \frac{1}{2} = \frac{3}{2}\left( 2^{2H} - 3^{2H-1} - 1\right) =:\psi (H). \end{aligned}$$

It is easy to see that \(\psi (\frac{1}{2}) = \psi (1) = 0\). Its second derivative equals

$$\begin{aligned} \psi ''(H) = 6\left( 2^{2H}\log ^2 2 - 3^{2H-1}\log ^2 3\right) = 6\cdot 2^{2H}\log ^2 3\left( \frac{\log ^2 2}{\log ^2 3} - \left( \frac{3}{2}\right) ^{2H}\!\cdot \frac{1}{3}\right) . \end{aligned}$$

Let \(H=\frac{1}{2}\). Then \(\frac{\log ^2 2}{\log ^2 3} - \frac{1}{2} <0\), which is confirmed by an approximate calculation:

$$\begin{aligned} \frac{\log ^2 2}{\log ^2 3} - \frac{1}{2} \approx \frac{0.693^2}{1.099^2} - \frac{1}{2} \approx 0.398 - 0.5 < 0. \end{aligned}$$

Since \(\left( \frac{3}{2}\right) ^{2H}\) increases in H, it follows that \(\psi ''(H) < 0\) on the whole interval \([\frac{1}{2},1]\). Moreover,

$$\begin{aligned} \psi '(\tfrac{1}{2}) = 3\left( 2\log 2 - \log 3\right) >0. \end{aligned}$$

Since \(\psi \) is concave on \([\frac{1}{2},1]\) with \(\psi (\frac{1}{2}) = \psi (1) = 0\) and \(\psi '(\frac{1}{2})>0\), it follows that on the interval \((\frac{1}{2},1)\)

$$\begin{aligned} \psi (H) = x - y > 0. \end{aligned}$$

Let us analyze

$$\begin{aligned} \zeta (H) = y'_H - 2 x'_H = 3^{2H} \log 3 - 4\cdot 2^{2H}\log 2 =2^{2H} \log 3 \left( \left( \frac{3}{2}\right) ^{2H} - \frac{\log 16}{\log 3}\right) . \end{aligned}$$

If \(H=1\), then

$$\begin{aligned} \left( \frac{3}{2}\right) ^{2H} - \frac{\log 16}{\log 3} =\frac{9}{4} - \frac{\log 16}{\log 3} < 0, \end{aligned}$$

which is confirmed by an approximate calculation: \(\frac{9}{4} - \frac{\log 16}{\log 3} \approx 2.25 - 2.524 < 0\). Since \(\left( \frac{3}{2}\right) ^{2H}\) increases in H, the expression in parentheses is negative for all \(H\in (\frac{1}{2},1]\). Consequently, \(\zeta (H) < 0\) and \(\varphi '_3(H) < 0\), which means that the determinant \(\det \varSigma _3(H)\) decreases on [1/2, 1].

Now, let \(H\in [0,\frac{1}{2})\). Here \(x'_H>0\), but the situation with \(y'_H\) is more involved. Let \(H_0 \approx 0.2868143617175754\) be the unique root of the equation

$$\begin{aligned} 3^{2H} \log 3 - 2 \cdot 2^{2H} \log 2 = 0. \end{aligned}$$

Then \(y'_H<0\) on \([0,H_0)\) and \(y'_H>0\) on \((H_0,\frac{1}{2}]\). If \(H \in [H_0,\frac{1}{2}]\), then in the formula for \(\varphi _3'(H)\) we have

$$\begin{aligned} x \le 0, \quad y \le 0, \quad x'_H \ge 0, \quad y'_H \ge 0, \end{aligned}$$

whence

$$\begin{aligned} 4xyx'_H \ge 0, \quad -2yy'_H \ge 0, \quad -4xx'_H \ge 0, \quad 2x^2y'_H \ge 0, \end{aligned}$$

i.e., \(\varphi _3'(H) \ge 0\).

Now, let \(H \in [0,H_0]\). Transform \(\varphi _3'(H)\) as follows:

$$\begin{aligned} \varphi _3'(H) = 2\bigl [2xyx'_H - y y'_H - 2 x x'_H + x^2 y'_H\bigr ] = 2\bigl [2x'_H x (y-1) - y'_H \left( y - x^2\right) \bigr ]. \end{aligned}$$

Further, \(\left|x\right| < 1\), therefore \(y - x^2 > y - 1\), and on \([0,H_0]\)

$$\begin{aligned} \left( -y'_H\right) \left( y - x^2\right) > \left( -y'_H\right) (y - 1). \end{aligned}$$

So,

$$\begin{aligned} \varphi _3'(H) > 2(y-1) \left( 2xx'_H - y'_H\right) . \end{aligned}$$

Obviously, \(y-1<0\). Consider

$$\begin{aligned} 2xx'_H - y'_H&= 2\left( 2^{2H-1} - 1\right) \cdot 2^{2H} \log 2 - 3^{2H} \log 3 + 2\cdot 2^{2H} \log 2\\&= 2^{4H} \log 2 - 2\cdot 2^{2H} \log 2 - 3^{2H} \log 3 + 2\cdot 2^{2H}\log 2\\&= 3^{2H} \log 2 \left( \left( \frac{4}{3}\right) ^{2H} - \frac{\log 3}{\log 2}\right) <0 \end{aligned}$$

for any \(H\in [0,H_0]\) (in fact, for any \(H\in [0,\frac{1}{2}]\)). Therefore, \(\varphi _3'(H) > 0\), which means that the determinant \(\det \varSigma _3(H)\) increases on [0, 1/2]. \(\square \)

Remark 2.7

For all \(H\in (0,1)\), \(\det \varSigma _2(H)\ge \det \varSigma _3(H)\) (where the equality is achieved only for \(H=\frac{1}{2}\) and for \(H\uparrow 1\)). Indeed, by (2.8) and (2.9), we get

$$\begin{aligned} \det \varSigma _2(H) - \det \varSigma _3(H)&= \rho _1^2(H) - 2 \rho _1^2(H) \rho _2(H) + \rho _2^2(H)\\&= (\rho _1(H)-\rho _2(H))^2 + 2 \rho _1(H) \rho _2(H) (1 - \rho _1(H)) \ge 0, \end{aligned}$$

since \(\rho _1(H)\le 1\), and \(\rho _1(H)\) and \(\rho _2(H)\) have the same sign (both are negative for \(H\in (0,\frac{1}{2})\) and positive for \(H\in (\frac{1}{2},1)\)). Figure 4 contains the graphs of \(\det \varSigma _2(H)\) and \(\det \varSigma _3(H)\).

In the general case, the monotonicity of \(\det \varSigma _n(H)\) as a function of n can be proved by representing it as a product of conditional variances, see Remark A.5 in the Appendix A.

Fig. 4: Graphs of \(\det \varSigma _2(H)\) (blue) and \(\det \varSigma _3(H)\) (orange)

3 Entropy, entropy rate and innovation variance. Lower bound for innovation variance

3.1 Fractional Gaussian noise on the whole axis

Until now, we have considered the entropy of stationary fractional Gaussian noise starting from zero. However, quite often stationary processes start from \(-\infty \), especially when questions of their regularity and other properties are investigated. Therefore, we recall how fractional Gaussian noise starting from \(-\infty \) can be constructed. For this purpose we use the Mandelbrot–van Ness representation of fractional Brownian motion. Let us briefly recall the concepts related to this object.

Standard two-sided Brownian motion is a process \(W=\{W_t, \; t\in \mathbb {R}\}\) constructed from two independent Brownian motions \(\{W_{-t}, \; t\ge 0\}\) and \(\{W_t, \; t\ge 0\}\), one of them with time reversed. Two-sided fractional Brownian motion is a zero-mean Gaussian process \(B^H=\{B^H_t, t\in \mathbb {R}\}\) with covariance function

$$\begin{aligned} {\textsf{E}}B^H_s B^H_t = \frac{1}{2} (|s|^{2H} + |t|^{2H} - |s-t|^{2H}). \end{aligned}$$

It admits the Mandelbrot–van Ness representation

$$\begin{aligned} B^H_t = c_H \int _{-\infty }^t \left( (t-s)_{+}^{H-\frac{1}{2}} - (-s)_{+}^{H-\frac{1}{2}}\right) dW_s, \end{aligned}$$
(3.1)

where \(c_H=\frac{(2H \sin (\pi H)\varGamma (2H))^{1/2}}{\varGamma (H + 1/2)} = \left( \frac{2H \varGamma (3/2-H)}{\varGamma (H + 1/2)\varGamma (2-2H)}\right) ^{1/2}\). Obviously, the process \(B^H\) has stationary increments \(B^H_s - B^H_{s-1}\), \(s\in \mathbb {R}\), whose covariance equals

$$\begin{aligned}&{\textsf{E}}\left( B^H_s - B^H_{s-1}\right) \left( B^H_t - B^H_{t-1}\right) \\&= \frac{1}{2}\left( |s-t-1|^{2H} - 2 |s-t|^{2H} + |s-t+1|^{2H}\right) , \quad s, t\in {\mathbb {R}}. \end{aligned}$$

3.2 Lower bound for the innovation variance

According to Proposition A.4 in the Appendix A, the entropy of a stationary Gaussian process \(\left\{ X_k,k=1,2,\dots \right\} \) can be expressed in terms of the following conditional variances:

$$\begin{aligned} r(k) = {{\,\textrm{var}\,}}[X_k \mid X_1,\ldots ,X_{k-1}], \end{aligned}$$
(3.2)

see formula (A.6). The values r(k) are deterministic, nonnegative and decreasing, see statement 1 of Proposition A.4. Hence, there exists the finite limit

$$\begin{aligned} \sigma ^2_{\textrm{inov}}(X) = \lim _{n\rightarrow \infty } r(n) \ge 0, \end{aligned}$$
(3.3)

which is called the innovation variance.
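In the Gaussian case the conditional variances (3.2) are ratios of successive leading principal minors of the covariance matrix, i.e., the squared diagonal entries of its Cholesky factor. A minimal numerical sketch (the helper names are ours; fgn_cov repeats the helper from Sect. 2.2):

```python
import numpy as np
from scipy.linalg import toeplitz

def fgn_cov(n, H):  # as in the sketch of Sect. 2.2
    k = np.arange(1, n, dtype=float)
    return toeplitz(np.r_[1.0, 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))])

def cond_variances(H, n):
    """r(k) = det Sigma_k / det Sigma_{k-1}: squared diagonal of the Cholesky factor."""
    return np.diag(np.linalg.cholesky(fgn_cov(n, H)))**2

r = cond_variances(0.7, 200)
print(r[:4], r[-1])  # r(k) decreases and levels off near the innovation variance
```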

Furthermore, for a stationary Gaussian process we have

$$\begin{aligned} \sigma ^2_{\textrm{inov}}(X)&= \lim _{n\rightarrow \infty } r(n) = \lim _{n\rightarrow \infty } {{\,\textrm{var}\,}}[X_n \mid X_{n-1},\ldots ,X_1] \\&= \lim _{n\rightarrow \infty } {{\,\textrm{var}\,}}[X_t \mid X_{t-1},\ldots ,X_{t-n+1}] \\&= {{\,\textrm{var}\,}}[X_t \mid X_{t-1}, X_{t-2}, \ldots ] \qquad \text {for all } t\in \mathbb {R} . \end{aligned}$$
(3.4)

It turns out that for fractional Gaussian noise \(G^H\) the limit (3.3) is strictly positive for all H, and moreover, it admits the following lower bound.

Theorem 3.1

(Lower bound for the innovation variance) For all \(H\in (0,1)\),

$$\begin{aligned} \sigma ^2_{\textrm{inov}}(G^H) \ge \frac{\varGamma \!\left( \frac{3}{2} - H\right) }{\varGamma \!\left( H+\frac{1}{2}\right) \varGamma (2-2H)} =:\sigma ^2_H. \end{aligned}$$
(3.5)

Proof

As a particular case of (3.4),

$$\begin{aligned} \sigma ^2_{\textrm{inov}} \left( G^H\right) = {{\,\textrm{var}\,}}\left[ G^H_1 \mid G^H_0, G^H_{-1}, G^H_{-2}, \ldots \right] . \end{aligned}$$

Notice that \(G^H_1 = B^H_1\), and all \(G^H_t = B^H_t - B^H_{t-1}\), \(t\le 0\), can be represented as integrals w.r.t. the Brownian motion \(\{W_t, \; t\le 0\}\) using (3.1), whence

$$\begin{aligned} \sigma (G^H_0, G^H_{-1},G^H_{-2}, \ldots ) \subset \sigma (W_s, \; s\le 0). \end{aligned}$$

By the partitioning of conditional variance, see (A.3),

$$\begin{aligned} \sigma ^2_{\textrm{inov}}\left( G^H\right) = {{\,\textrm{var}\,}}[B^H_1 \mid G^H_0, G^H_{-1}, G^H_{-2}, \ldots ] \ge {{\,\textrm{var}\,}}[B^H_1 \mid W_s, \; s\le 0]. \end{aligned}$$
(3.6)

Finally, since the process \(\{B^H_t,\; t>0\}\) is a Volterra Gaussian process with the representation (3.1), we see that the conditional variance in the right-hand side of (3.6) can be calculated by the formula (A.2) as follows

$$\begin{aligned} {{\,\textrm{var}\,}}[B^H_1 \mid W_s, \; s\le 0] = \int _0^1 c_H^2 (1-s)^{2H-1} \, ds = \frac{c_H^2}{2H} = \frac{\varGamma \!\left( \frac{3}{2} - H\right) }{\varGamma \!\left( H+\frac{1}{2}\right) \varGamma (2-2H)}. \\ \end{aligned}$$

\(\square \)
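Numerically, the bound (3.5) can be compared with r(n) for large n, which approximates \(\sigma ^2_{\textrm{inov}}(G^H)\) by (3.3); a sketch, reusing the cond_variances helper from the sketch after (3.3):

```python
from scipy.special import gamma

def sigma2_H(H):
    """Lower bound (3.5) for the innovation variance."""
    return gamma(1.5 - H) / (gamma(H + 0.5) * gamma(2 - 2 * H))

for H in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(H, cond_variances(H, 500)[-1], sigma2_H(H))  # r(500) stays at or above the bound
```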

3.3 Lower bound for the entropy and the entropy rate

Taking (3.1) into account, let us study the asymptotic behavior of the entropy of fractional Gaussian noise as \(n\rightarrow \infty \). We start with the definition of entropy rate, see [11, Eq. (4.2)].

Definition 3.2

The entropy rate of a discrete-time stochastic process X is

$$\begin{aligned} \textbf{H}_\infty (X) = \lim _{n\rightarrow \infty } \frac{{\textbf{H}}(X_1,\ldots ,X_n)}{n} \end{aligned}$$

if this limit exists.

For the case of Gaussian process X, we may define also

$$\begin{aligned} \widetilde{\textbf{H}}_\infty (X) = \lim _{n\rightarrow \infty } \frac{\widetilde{{\textbf{H}}}(X_1,\ldots ,X_n)}{n}, \end{aligned}$$

where \(\widetilde{{\textbf{H}}}(X_1,\ldots ,X_n)\) is introduced in (2.2).

Let X be a stationary Gaussian process. Then, applying Proposition A.4 from Appendix A, we obtain that its entropy rate equals

$$\begin{aligned} {\textbf{H}}_\infty (X) = \frac{1 + \log (2\pi )}{2} + \frac{1}{2} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{k=1}^n \log r(k), \end{aligned}$$

where r(k) is defined by (3.2). If \(\sigma ^2_{\textrm{inov}} (X) > 0\), then

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \sum _{k=1}^n \log r(k) = \lim _{k\rightarrow \infty } \log r(k) = \log (\sigma ^2_{\textrm{inov}}(X)), \end{aligned}$$

hence,

$$\begin{aligned} {\textbf{H}}_\infty (X) = \frac{1 + \log (2\pi )}{2} + \log \sigma _{\textrm{inov}}(X). \end{aligned}$$
(3.7)

If \(\sigma _{\textrm{inov}}^2(X) = 0\), then the entropy rate of the process X equals \({\textbf{H}}_\infty (X) = -\infty \).

Using the results of the previous subsection, we see that for fractional Gaussian noise \(G^H\) the entropy rate exists and, moreover, admits a finite lower bound. Namely, we have the following result.

Theorem 3.3

(Lower bounds for the entropy and entropy rate) The entropy and the entropy rate of fractional Gaussian noise satisfy inequalities:

$$\begin{aligned} {\textbf{H}}(G^H_1,\ldots ,G^H_n)&\ge \frac{n}{2} \Bigl ( 1 + \log (2\pi ) + \log \sigma ^2_H\Bigr ),\nonumber \\ {\textbf{H}}_\infty (G^H)&\ge \frac{ 1 + \log (2\pi )}{2} + \log \sigma _H, \end{aligned}$$
(3.8)

where \(\sigma ^2_H\) is defined in (3.5).

Proof

Since \(G^H\) is a stationary Gaussian process, we have by Proposition A.4,

$$\begin{aligned} {\textbf{H}}(G^H_1,\ldots ,G^H_n) = \frac{n + n \log (2\pi )}{2} + \frac{1}{2} \sum _{k=1}^n \log r(k) \ge \frac{n}{2} \Bigl ( 1 + \log (2\pi ) + \log \sigma ^2_H\Bigr ), \end{aligned}$$

since \(r(k)\ge \sigma _{\textrm{inov}}^2\ge \sigma ^2_H\) for all k, see Theorem 3.1. The inequality (3.8) for the entropy rate follows immediately from the representation (3.7) and the lower bound (3.5). \(\square \)

3.4 Calculation of the entropy rate via spectral density

According to [31, Eq. (5.5.17)], the entropy rate of a stationary Gaussian process X can be expressed in the form

$$\begin{aligned} {\textbf{H}}_\infty (X) = \frac{1 + \log (2\pi )}{2} + \frac{1}{2} \int _{-1/2}^{1/2} \log \bigl (2\pi f(2\pi \mu )\bigr ) d\mu , \end{aligned}$$
(3.9)

where \(f(\lambda ) =\frac{1}{2\pi } \sum _{k=-\infty }^\infty \gamma (k) e^{i \lambda k}\), \(-\pi \le \lambda \le \pi \), is the spectral density of X and \(\gamma \) is its autocovariance function. In particular, for fractional Gaussian noise, this approach leads to the following result.

Lemma 3.4

The entropy rate of the fractional Gaussian noise admits the following representation:

$$\begin{aligned} {\textbf{H}}_\infty (G^H)&= \frac{1}{2}\left( 1 +\log \left( \sin (\pi H)\varGamma (2H+1)(2\pi )^{-2H}\right) \right) \nonumber \\&\quad + \frac{1}{2} \int _{-1/2}^{1/2} \log \left( \sum _{k=-\infty }^{+\infty } |\mu + k|^{-2H-1}\right) d\mu . \end{aligned}$$
(3.10)

Proof

According to [6, Proposition 2.1] the spectral density of fractional Gaussian noise \(G^H\) is given by

$$\begin{aligned} f(\lambda )&= \frac{1}{2\pi } \sum _{k=-\infty }^{+\infty } \rho _k(H) e^{ik\lambda }\\&= \frac{1}{\pi }\sin (\pi H)\varGamma (2H+1)(1-\cos \lambda )\sum _{k=-\infty }^{+\infty } |\lambda +2\pi k|^{-2H-1}, \quad -\pi \le \lambda \le \pi . \end{aligned}$$

Therefore, it follows from (3.9) that the entropy rate can be calculated as follows

$$\begin{aligned} {\textbf{H}}_\infty (G^H)&= \frac{1}{2}\left( 1 +\log \left( 2\sin (\pi H)\varGamma (2H+1)(2\pi )^{-2H}\right) \right) \\&\quad + \frac{1}{2} \int _{-\frac{1}{2}}^{\frac{1}{2}} \log \bigl (1-\cos (2\pi \mu )\bigr ) d\mu \\&\quad + \frac{1}{2} \int _{-1/2}^{1/2} \log \left( \sum _{k=-\infty }^{+\infty } |\mu + k|^{-2H-1}\right) d\mu . \end{aligned}$$

It is not hard to compute \(\int _{-\frac{1}{2}}^{\frac{1}{2}} \log \bigl (1-\cos (2\pi \mu )\bigr ) d\mu = -\log 2\), whence (3.10) follows. \(\square \)

Remark 3.5

For computational reasons, it may be convenient to express the infinite sum from (3.10) as

$$\begin{aligned} \sum _{k=-\infty }^{+\infty } |\mu + k|^{-2H-1} = \zeta (2H+1,\mu ) + \zeta (2H+1,-\mu ) - |\mu |^{-2H-1}, \end{aligned}$$

where \(\zeta (s,a) = \sum _{k=0}^\infty \left|a+k\right|^{-s}\) denotes the Hurwitz zeta function.
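With this representation, the entropy rate (3.10) is straightforward to evaluate; a sketch using SciPy's Hurwitz zeta function zeta(s, a) (which requires \(a>0\), so for \(\mu \in (0,\frac{1}{2}]\) we use the equivalent form \(\zeta (2H+1,\mu ) + \zeta (2H+1,1-\mu )\) of the infinite sum):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta  # zeta(s, a) is the Hurwitz zeta function

def entropy_rate(H):
    """Entropy rate (3.10) of fractional Gaussian noise, in nats."""
    s = 2 * H + 1
    const = 0.5 * (1 + np.log(np.sin(np.pi * H) * gamma(2 * H + 1) * (2 * np.pi)**(-2 * H)))
    # The integrand is even in mu, so half of the integral over (-1/2, 1/2)
    # in (3.10) equals the integral over (0, 1/2).
    integral, _ = quad(lambda mu: np.log(zeta(s, mu) + zeta(s, 1 - mu)), 0, 0.5)
    return const + integral

for H in [0.1, 0.5, 0.75, 0.9]:
    print(H, entropy_rate(H))
# H = 0.5 gives (1 + log(2*pi)) / 2 ~ 1.41894, in line with the values listed below.
```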

Figure 5 contains the graphs of \(\frac{1}{n} {\textbf{H}}(G_1^H,\dots ,G_n^H)\) for \(n = 10\), 50, and 100 together with the entropy rate \({\textbf{H}}_\infty (G^H)\) (computed by formula (3.10)) and the lower bound (3.8). On the one hand, this confirms the convergence of the normalized entropies to the entropy rate. On the other hand, we see that formula (3.8) gives a rather accurate lower bound for all values of H. Moreover, the graph of \({\textbf{H}}_\infty (G^H)\) confirms the following theoretical values for the particular cases (see Remark 2.4):

$$\begin{aligned} H&=0:&{\textbf{H}}_\infty \left( G^0\right)&= \lim \limits _{n\rightarrow \infty } \frac{1}{2} \left( 1+\log \pi +\frac{1}{n}\log (n+1)\right) = \frac{1}{2} \left( 1+\log \pi \right) \approx 1.07236;\\ H&=\tfrac{1}{2}:&{\textbf{H}}_\infty \left( G^{\frac{1}{2}}\right)&= \frac{1}{2} \bigl (1+\log (2\pi )\bigr )\approx 1.41894;\\ H&=1:&{\textbf{H}}_\infty \left( G^{1}\right)&= -\infty . \end{aligned}$$
Fig. 5: The normalized entropy \({\textbf{H}}(G_1^H,\dots ,G_n^H)/n\) for \(n = 10\), 50, and 100, the entropy rate \({\textbf{H}}_\infty (G^H)\), and the lower bound (3.8)

4 Entropy functionals

4.1 Definition and the main properties of entropy functionals

Let us introduce two alternative entropy functionals based on the elements of the covariance matrix, for \(H\in (0,1)\). The first functional is proportional to the sum of squares of all distinct elements of the covariance matrix:

$$\begin{aligned} E^1_H(N)&=-\frac{(H-1/2)^2}{1-H}F^1_H(N) =-\frac{(H-1/2)^2}{1-H} \sum _{k=1}^N (2\rho _k(H))^2\\&=-\frac{(H-1/2)^2}{1-H}\left( \sum _{k=2}^N \left( (k+1)^{2H} + (k-1)^{2H} -2k^{2H} \right) ^2 + \left( 2^{2H} -2\right) ^2\right) , \end{aligned}$$

and the second functional is related to the covariance matrix as follows:

$$\begin{aligned} E^2_H(N)&=-\frac{(H-1/2)^2}{1-H}F^2_H(N) =-\frac{(H-1/2)^2}{1-H}\sum _{k=1}^N (N-k+1)\left|2\rho _k(H)\right|\\&= -\frac{(H-1/2)^2}{1-H}\Biggl (\sum _{k=2}^N (N-k+1)\left|(k+1)^{2H} + (k-1)^{2H} -2k^{2H}\right| + N\left|2^{2H} -2\right|\Biggr ). \end{aligned}$$

Remark 4.1

In both cases we have separated the term \(2^{2H} -2\) corresponding to \(k=1\), because we intend to study the behaviour of both functionals as functions of \(H\in [0,1]\), and the behaviour of this term differs from that of the other terms. Recall also that for \(H\in [1/2, 1]\) the absolute values in \(E^2_H(N)\) can be omitted.
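Both functionals are cheap to evaluate numerically, which is precisely their advantage over (2.5); a minimal sketch (the helper names are ours):

```python
import numpy as np

def two_rho(H, N):
    """2 * rho_k(H) for k = 1, ..., N, see (2.4)."""
    k = np.arange(1, N + 1, dtype=float)
    return (k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H)

def E1(H, N):
    return -(H - 0.5)**2 / (1 - H) * np.sum(two_rho(H, N)**2)

def E2(H, N):
    k = np.arange(1, N + 1)
    return -(H - 0.5)**2 / (1 - H) * np.sum((N - k + 1) * np.abs(two_rho(H, N)))

# Both functionals vanish at H = 1/2 and are negative elsewhere.
print(E1(0.5, 100), E2(0.5, 100), E1(0.25, 100), E2(0.75, 100))
```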

Theorem 4.2

Both functionals \(E^1_H(N)\) and \(E^2_H(N)\), for any fixed \(N\ge 2\), have the following behaviour as functions of \(H\in [0,1]\): they increase in \(H\in [0,\frac{1}{2}]\), vanish at \(H=\frac{1}{2}\) and decrease in \(H\in [\frac{1}{2},1]\). The functional \(E^1_H(N)\) increases from \(-1/4\) to 0 and decreases from 0 to \(-\infty \), while \(E^2_H(N)\) increases from \(-N/4\) to 0 and decreases from 0 to \(-\infty \).

Proof

Note that the function \(\phi (H)=\frac{(H-1/2)^2}{1-H}\) has the derivative

$$\begin{aligned} \phi '(H)=\frac{(H-1/2)(3/2-H)}{(1-H)^2}, \end{aligned}$$

therefore \(\phi \) is nonnegative, decreases on [0, 1/2] and increases on [1/2, 1]. Hence it is sufficient to establish that \(F^i_H(N)\), \(i=1,2\), decrease in H when H increases from 0 to 1/2 and increase in H when H increases from 1/2 to 1. First, consider \(H\in (\frac{1}{2},1]\). Then \((k+1)^{2H} + (k-1)^{2H} -2k^{2H} > 0\) and

$$\begin{aligned} \frac{\partial F^1_H(N)}{\partial H}&= 4 \sum _{k=2}^N \left( (k+1)^{2H} + (k-1)^{2H} -2k^{2H} \right) \\&\quad \times \left( (k+1)^{2H} \log (k+1) + (k-1)^{2H}\log (k-1) -2k^{2H}\log k \right) \\&\quad + 4 \left( 2^{2H} -2\right) 2^{2H} \log 2;\\ \frac{\partial F^2_H(N)}{\partial H}&= 2 \sum _{k=2}^N (N-k+1)\\&\quad \times \bigl ((k+1)^{2H} \log (k+1) + (k-1)^{2H}\log (k-1) -2k^{2H}\log k \bigr ) \\&\quad + 2N 2^{2H} \log 2. \end{aligned}$$

Let us analyze the value

$$\begin{aligned} \zeta (k,H) = (k+1)^{2H} \log (k+1) + (k-1)^{2H}\log (k-1) -2k^{2H}\log k. \end{aligned}$$

Consider the function

$$\begin{aligned} \varphi (x) = x^{2H} \log x,\quad x\ge 1, \; 2H>1. \end{aligned}$$

Its second derivative equals

$$\begin{aligned} \varphi ''(x) = x^{2H-2} \bigl (2H(2H-1)\log x +4H-1\bigr ), \end{aligned}$$
(4.1)

and for \(x\ge 1\) and \(2H>1\) we have \(\varphi ''(x)>0\). It means that \(\varphi \) is convex for \(x\ge 1\), whence \(\zeta (k,H) >0\) for \(k\ge 2\), \(H>\frac{1}{2}\). Obviously, both additional terms \(4 \left( 2^{2H} -2\right) 2^{2H} \log 2\) and \(2N 2^{2H} \log 2\) are strictly positive. So both derivatives \(\frac{\partial F^i_H(N)}{\partial H}>0\), \(i=1,2\), for \(H\in (\frac{1}{2},1]\), and thus \(F^1_H(N)\) and \(F^2_H(N)\) are strictly increasing in H from 0 to \(F^1_1(N) = 4N\) and \(F^2_1(N) = N (N+1)\), respectively.

Second, consider \(H\in [0,\frac{1}{2})\). In this case \((k+1)^{2H} + (k-1)^{2H} -2k^{2H} < 0\) for \(k\ge 2\); therefore, it is more convenient to rewrite \(\frac{\partial F^1_H(N)}{\partial H}\) as

$$\begin{aligned} \begin{aligned} \frac{\partial F^1_H(N)}{\partial H}&= 4 \sum _{k=2}^N \left( 2k^{2H} - (k+1)^{2H} - (k-1)^{2H} \right) \\*&\quad \times \left( 2k^{2H}\log k - (k+1)^{2H} \log (k+1) - (k-1)^{2H}\log (k-1)\right) \\&\quad + 4 \log 2\cdot 2^{2H} \left( 2^{2H} -2\right) . \end{aligned} \end{aligned}$$
(4.2)

Let us analyze the behaviour of all terms in (4.2). Consider again the function \(\varphi \), whose second derivative is given in (4.1). This second derivative is negative for x such that \(\log x > \frac{4H-1}{2H(1-2H)}\) and positive if \(\log x < \frac{4H-1}{2H(1-2H)}\). Since we consider \(x\ge 1\), for \(H\le \frac{1}{4}\) we have \(\varphi ''(x)<0\) for all \(x> 1\), while for \(H\in (\frac{1}{4},\frac{1}{2})\) we have \(\varphi ''(x)>0\) for \(x\in (1,x_0)\) and \(\varphi ''(x)<0\) for \(x\in (x_0,\infty )\), where \(x_0 = \exp \left( \frac{4H-1}{2H(1-2H)}\right) \). Put \(N_0 = \lfloor x_0\rfloor \). Then

$$\begin{aligned} \frac{\partial F^1_H(N)}{\partial H}&< 4 \sum _{k=N_0}^N \left( 2k^{2H} - (k+1)^{2H} - (k-1)^{2H} \right) \\*&\quad \times \left( 2k^{2H}\log k - (k+1)^{2H} \log (k+1) - (k-1)^{2H}\log (k-1)\right) \\*&\quad + 4 \log 2\cdot 2^{2H} \left( 2^{2H} -2\right) . \end{aligned}$$

For any fixed \(H\in (0,\frac{1}{2})\), the function \(\psi (k)=2k^{2H} - (k+1)^{2H} - (k-1)^{2H}\) has the derivative \(\frac{\partial \psi }{\partial k}(k)=2H\left( 2k^{2H-1} - (k+1)^{2H-1} - (k-1)^{2H-1}\right) <0\) (by the convexity of \(x\mapsto x^{2H-1}\) for \(H<\frac{1}{2}\)); therefore,

$$\begin{aligned} \frac{\partial F^1_H(N)}{\partial H}&< 4 \left( 2N_0^{2H} - (N_0+1)^{2H} - (N_0-1)^{2H} \right) \\&\quad \times \sum _{k=N_0}^N \left( 2k^{2H}\log k - (k+1)^{2H} \log (k+1) - (k-1)^{2H}\log (k-1)\right) \\&\quad + 4 \log 2\cdot 2^{2H} \left( 2^{2H} -2\right) \\&=4 \left( 2N_0^{2H} - (N_0+1)^{2H} - (N_0-1)^{2H} \right) \bigl (N^{2H}\log N \\&\quad - (N+1)^{2H} \log (N+1) + N_0^{2H}\log N_0 - (N_0-1)^{2H} \log (N_0-1)\bigr ) \\&\quad + 4 \log 2\cdot 2^{2H} \left( 2^{2H} -2\right) \\&<4 \left( 2N_0^{2H} - (N_0+1)^{2H} - (N_0-1)^{2H} \right) \\&\quad \times \bigl ( N_0^{2H}\log N_0 - (N_0-1)^{2H} \log (N_0-1)\bigr ) + 4 \log 2\!\cdot \!2^{2H} \left( 2^{2H} -2\right) \\&<4 \left( 2 - 2^{2H}\right) \left( N_0^{2H}\log N_0 - (N_0-1)^{2H} \log (N_0-1)-2^{2H}\log 2\right) . \end{aligned}$$

Again, for fixed H consider the function

$$\begin{aligned} \zeta (x) = x^{2H}\log x - (x-1)^{2H} \log (x-1), \quad x\ge N_0. \end{aligned}$$

Its derivative equals

$$\begin{aligned} \zeta '(x) = (2H\log x + 1)x^{2H-1} - (x-1)^{2H-1}(2H\log (x-1) + 1), \quad x\ge N_0 \end{aligned}$$

and the function \(\delta (x) = x^{2H-1}(2H\log x + 1)\) satisfies \(\delta '(x) = \varphi ''(x)<0\), \(x\ge N_0\). Therefore, \(\zeta '(x)<0\), \(x\ge N_0\), and

$$\begin{aligned} N_0^{2H}\log N_0 - (N_0-1)^{2H} \log (N_0-1)-2^{2H}\log 2< 2^{2H}\log 2-2^{2H}\log 2=0. \end{aligned}$$
(4.3)

Concerning \(F^2_H(N)\), for \(H\in [0,\frac{1}{2})\) it equals

$$\begin{aligned} F^2_H(N) = \sum _{k=2}^N (N-k+1)\left( 2k^{2H} - (k+1)^{2H} - (k-1)^{2H}\right) + N\left( 2-2^{2H}\right) \end{aligned}$$

and

$$\begin{aligned} \frac{\partial F^2_H(N)}{\partial H}&= 2 \sum _{k=2}^N (N-k+1) \\*&\quad \times \left( 2k^{2H}\log k - (k+1)^{2H} \log (k+1) - (k-1)^{2H}\log (k-1) \right) \\*&\quad - 2N 2^{2H} \log 2 \\&<2N\!\sum _{k=N_0}^N\!\!\left( 2k^{2H}\log k - (k+1)^{2H} \log (k+1) - (k-1)^{2H}\log (k-1) \right) \\*&\quad - 2N 2^{2H} \log 2 \\&\le 2N\bigl (N^{2H}\log N - (N+1)^{2H} \log (N+1) + N_0^{2H}\log N_0 \\*&\quad - (N_0-1)^{2H}\log (N_0-1) - 2^{2H} \log 2\bigr )<0 \end{aligned}$$

due to (4.3). \(\square \)

4.2 Entropy rate for entropy functionals

It is easy to see from formula (2.4) that the \(\rho _k(H)\) are positive and decrease in k for \(H\in (1/2,1)\), and are negative and increase in k for \(H\in (0,1/2)\); therefore all the summands \((2\rho _k(H))^2\) in \(E_H^1(N)\) decrease in k. Moreover,

$$\begin{aligned} 2\rho _k(H)&= k^{2H}\left( \left( 1+\frac{1}{k}\right) ^{2H} + \left( 1-\frac{1}{k}\right) ^{2H} - 2\right) \sim 2k^{2H}\frac{2H(2H-1)}{2k^2}\\&= 2H(2H-1)k^{2H-2}, \quad \text {as } k\rightarrow \infty , \end{aligned}$$

therefore \(\bigl (2\rho _k(H)\bigr )^2 \sim 4H^2(2H-1)^2k^{4H-4}\) as \(k\rightarrow \infty \). It means that entropy functional \(E_H^1(N)\) has the following asymptotic properties.

Lemma 4.3

  1. (i)

    Let \(H\in (0,\frac{3}{4})\). Then the series \(\sum _{k=1}^\infty \bigl (2\rho _k(H)\bigr )^2\) converges, and

    $$\begin{aligned} E_H^1(N) \rightarrow E_H^1(\infty ) = - \frac{(H-\frac{1}{2})^2}{1-H} \sum _{k=1}^\infty \bigl (2\rho _k(H)\bigr )^2 \quad \text {as } N\rightarrow \infty . \end{aligned}$$
  2. (ii)

    Let \(H = \frac{3}{4}\). Then

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{E_H^1(N)}{\log N} = -\frac{9}{64}. \end{aligned}$$
  3. (iii)

    Let \(H\in (\frac{3}{4},1)\). Then

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{E_H^1(N)}{N^{4H-3}} = -\frac{H^2(2H-1)^4}{(1-H)(4H-3)}. \end{aligned}$$

Proof

Item (i) is evident.

(ii) Indeed, by the Stolz–Cesàro theorem, for \(H=\frac{3}{4}\)

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{E_H^1(N)}{\log N}&= -\frac{(H-\frac{1}{2})^2}{1-H}\lim _{N\rightarrow \infty } \frac{\bigl (2\rho _N(\frac{3}{4})\bigr )^2}{\frac{1}{N}}\\&= -\frac{(H-\frac{1}{2})^2}{1-H} \cdot \lim _{N\rightarrow \infty }\frac{4 H^2 (2H-1)^2 N^{-1}}{N^{-1}} = - \frac{H^2 (2H-1)^4}{1-H} = -\frac{9}{64}. \end{aligned}$$

(iii) Indeed,

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{E_H^1(N)}{N^{4H-3}}&= -\frac{(H-\frac{1}{2})^2}{1-H} \lim _{N\rightarrow \infty } \frac{4 H^2 (2H-1)^2 N^{4H-4}}{(4H-3)N^{4H-4}} = -\frac{H^2(2H-1)^4}{(1-H)(4H-3)}. \end{aligned}$$

\(\square \)
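The logarithmic rate in item (ii) can be checked numerically; convergence is slow, of order \(1/\log N\), so the ratio approaches \(-9/64\) only gradually. A sketch:

```python
import numpy as np

H = 0.75
for N in [10**3, 10**4, 10**5, 10**6]:
    k = np.arange(1, N + 1, dtype=float)
    F1 = np.sum(((k + 1)**1.5 - 2*k**1.5 + (k - 1)**1.5)**2)
    print(N, -(H - 0.5)**2 / (1 - H) * F1 / np.log(N))
# The ratio slowly approaches -9/64 = -0.140625.
```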

Lemma 4.4

  1. (i)

    Let \(H\in (0,\frac{1}{2})\). Then

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{E_H^2(N)}{N} = - \frac{(H-\frac{1}{2})^2}{1-H} \sum _{k=1}^\infty \left|2\rho _k(H)\right| . \end{aligned}$$
  2. (ii)

    Let \(H = \frac{1}{2}\). Then \(E_H^2(N) = 0\), \(N\ge 1\), and its limit equals zero.

  3. (iii)

    Let \(H\in (\frac{1}{2},1)\). Then

    $$\begin{aligned} \lim _{N\rightarrow \infty } \frac{E_H^2(N)}{N^{2H}} = - \frac{(H-\frac{1}{2})^2}{1-H}. \end{aligned}$$

Proof

Consider separately

$$\begin{aligned} S_N^1&= N \sum _{k=2}^N \left|(k+1)^{2H} + (k-1)^{2H} -2k^{2H}\right| \end{aligned}$$

and

$$\begin{aligned} S_N^2&= \sum _{k=2}^N (k-1)\left|(k+1)^{2H} + (k-1)^{2H} -2k^{2H}\right|. \end{aligned}$$

(i) Let \(H\in (0,\frac{1}{2})\). Then \(\left|(k+1)^{2H} + (k-1)^{2H} -2k^{2H}\right| \sim 2H (1-2H) k^{2H-2}\), and \(\sum _{k=2}^\infty \left|(k+1)^{2H} + (k-1)^{2H} -2k^{2H}\right| < \infty \). Therefore

$$\begin{aligned} \frac{S^1_N}{N} \rightarrow \sum _{k=2}^\infty \left|(k+1)^{2H} + (k-1)^{2H} -2k^{2H}\right|, \quad \text {as } N\rightarrow \infty . \end{aligned}$$

Further, \((k-1)\left|(k+1)^{2H} + (k-1)^{2H} -2k^{2H}\right| \sim 2H(1-2H)k^{2H-1}\). Therefore

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{S^2_N}{N} = \lim _{N\rightarrow \infty } N^{2H-1}2H(1-2H) = 0. \end{aligned}$$

Combining these two limits with the term \(N\left|2^{2H}-2\right|/N = 2-2^{2H}\), we arrive at (i).

(iii) Let \(H\in (\frac{1}{2},1)\). Then

$$\begin{aligned} \frac{S^1_N}{N^{2H}} \sim \frac{N^{2H-2}2H(2H-1)}{(2H-1)N^{2H-2}}, \quad \text {so}\quad \lim _{N\rightarrow \infty } \frac{S^1_N}{N^{2H}} = 2H. \end{aligned}$$

Further,

$$\begin{aligned} \frac{S^2_N}{N^{2H}} \sim \frac{N^{2H-1}2H(2H-1)}{2H N^{2H-1}}, \quad \text {so}\quad \lim _{N\rightarrow \infty } \frac{S^2_N}{N^{2H}} = 2H-1, \end{aligned}$$

whence (iii) follows, since \(\lim _{N\rightarrow \infty } E_H^2(N)/N^{2H} = -\frac{(H-\frac{1}{2})^2}{1-H}\bigl (2H-(2H-1)\bigr ) = -\frac{(H-\frac{1}{2})^2}{1-H}\). \(\square \)
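The rate in item (iii) can likewise be checked numerically; a sketch (the normalization \(N^{2H}\) and the grid of H values follow Lemma 4.4):

```python
import numpy as np

for H in [0.6, 0.75, 0.9]:
    phi = (H - 0.5)**2 / (1 - H)
    N = 10**5
    k = np.arange(1, N + 1, dtype=float)
    r = np.abs((k + 1)**(2*H) - 2*k**(2*H) + (k - 1)**(2*H))
    E2 = -phi * np.sum((N - k + 1) * r)
    print(H, E2 / N**(2*H), -phi)  # the ratio tends to -phi(H) as N grows
```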