1 Introduction

Lotka and Volterra pioneered the mathematical study of the temporal dynamics of competing species in biology. They investigated the class of dynamical systems now commonly referred to as Lotka–Volterra (LV) systems [46, 58]. The significance of discrete-time models derived from LV systems has been demonstrated in applied mathematics [40, 46, 48, 53, 57].

On the other hand, exploring non-Lotka-Volterra (non-LV) systems, such as those observed in oceanic plankton, is crucial as they offer valuable insights into real-world scenarios [35, p.71]. While LV systems have been suggested for modeling interactions among biochemical populations [39], nonlinear models, including stochastic ones, prove more adept at capturing intricate dynamics [23, 54]. Quadratic models, a straightforward form of non-LV operators, have been extensively examined in the realm of genetic models [36, 45]. F-quadratic stochastic operators, a specific category of non-LV operators, find application in modeling sexual systems. Other non-LV operators, such as those studied in [41], demonstrate that stochastic operators offer a means to analyze population dynamics effectively. This diverse exploration underscores the importance of considering various mathematical frameworks to gain a comprehensive understanding of population dynamics [43, 47, 52].

Therefore, it is natural to explore more general nonlinear stochastic operators on a finite-dimensional simplex, including an investigation of their properties and the proof of each operator’s stability. The biological significance of the stability (regularity) of stochastic operators is also discussed (see [43]), as the distribution of species in the next generation will coincide with the species distribution in the previous one, making it stable in the long run (see also [36]). In the present paper, we introduce a class of F-stochastic operators on a finite-dimensional simplex, each of which is regular, ascertaining that the species distribution in the next generation corresponds to the species distribution in the previous one in the long run. Furthermore, we propose a new scheme to define a non-homogeneous Markov chain associated with F-stochastic operators and given initial data [38, 55]. This scheme is then elaborated to define non-homogeneous entangled quantum Markov chains [5, 7, 8, 31], which are quantum liftings of random walks and classical Markov chains.

We highlight the introduction of quantum Markov chains (QMCs) by L. Accardi [1] as a profound extension of classical Markov chains. Subsequently, extensive exploration has been undertaken, encompassing both theoretical investigations [9, 10] and practical applications [2,3,4, 6, 12, 24, 26, 32, 56].

Mixing properties of Markov chains play a crucial role in speeding up randomized algorithms. This idea has been supported by numerous works and has been extended to quantum algorithms, where it has led to even faster computations. Inspired by these successes, we will demonstrate that a specific type of quantum Markov chain possesses the \(\psi\)-mixing property, which is essential for efficient mixing and, consequently, faster algorithms [16, 33, 34].

It is noteworthy that the introduction of quantum counterparts of mixing times for Markov chains [11, 13, 21, 50] has proven instrumental in expediting quantum algorithms and their applications, as exemplified in [25, 27,28,29]. This work further explores these connections by establishing the \(\psi\)-mixing property for a specific type of non-homogeneous entangled quantum Markov chain.

Motivated by the exceptional efficiency of quantum algorithms and their significant impact on mixing properties, we aim to establish, in the forthcoming section, that the defined non-homogeneous entangled quantum Markov chain (QMC) exhibits the \(\psi\)-mixing property.

2 Preliminaries

Let \(E = \{1, \dots , m\}\) be a finite set. Then, the set of all probability distributions on E is the \((m-1)\)-dimensional simplex, which is given by

$$\begin{aligned} S^{m-1} =\left\{ \textbf{x} = (x_1, x_2,\dots ,x_m)\in \mathbb {R}^m: \ x_i \ge 0, \text { for any } i \, \text{ and } \sum _{i=1}^m x_i=1\right\} . \end{aligned}$$

A mapping \(V:\mathbb {R}^m_+\rightarrow \mathbb {R}^m_+\) is called stochastic if \(V(S^{m-1})\subset S^{m-1}\).
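As a quick illustration (a minimal NumPy sketch; the averaging operator V below is our own toy choice, not one studied in this paper), membership in \(S^{m-1}\) and the stochasticity condition \(V(S^{m-1})\subset S^{m-1}\) can be checked numerically:

```python
import numpy as np

def in_simplex(x, tol=1e-9):
    """Membership in S^{m-1}: nonnegative coordinates summing to 1."""
    x = np.asarray(x, dtype=float)
    return bool((x >= -tol).all() and abs(x.sum() - 1.0) <= tol)

def V(x):
    """A toy stochastic operator: average each coordinate with its cyclic neighbour.
    Sums are preserved and nonnegativity is kept, so V(S^{m-1}) ⊂ S^{m-1}."""
    x = np.asarray(x, dtype=float)
    return 0.5 * (x + np.roll(x, -1))

x = np.array([0.2, 0.3, 0.5])
assert in_simplex(x) and in_simplex(V(x))
```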

The trajectory \(\{\textbf{x}^{(n)}\}_{n=0}^\infty\) of V for an initial value \(\textbf{x}^{(0)}\in S^{m-1}\) is defined by

$$\begin{aligned} \textbf{x}^{(n)} = \underbrace{V \circ V \circ \cdots \circ V}_n\left( \textbf{x}^{(0)}\right) , \quad n = 1, 2, \dots \end{aligned}$$

A point \(\textbf{x}\in S^{m-1}\) is called a fixed point of an operator V if \(V(\textbf{x}) = \textbf{x}\). The set of fixed points of V is denoted by Fix(V).

A key issue in mathematical biology is the study of the asymptotic behavior of the trajectory \(\{\textbf{x}^{(n)}\}\) for a given initial value \(\textbf{x}^{(0)}\in S^{m-1}\) and a given stochastic operator V. Even in low-dimensional settings, this problem remains open [37, 42].

A stochastic operator V is called stable (or regular) if the limit

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } V^n(\textbf{x}) \end{aligned}$$

exists for any initial value \(\textbf{x}\in S^{m-1}\).
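To illustrate regularity (a minimal NumPy sketch; the linear operator \(V(\textbf{x})=\textbf{x}A\) with a primitive stochastic matrix A is our own example, chosen because such operators are well known to be regular), one can iterate the trajectory and observe that successive iterates stabilize:

```python
import numpy as np

def trajectory(V, x0, n):
    """Return the trajectory [x0, V(x0), ..., V^n(x0)]."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n):
        xs.append(V(xs[-1]))
    return xs

# linear stochastic operator V(x) = x A with a primitive stochastic matrix A;
# such operators are regular, so lim V^n(x) exists
A = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])
V = lambda x: x @ A

xs = trajectory(V, [1.0, 0.0, 0.0], 200)
# successive iterates coincide to high precision: the limit exists
```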

This paper focuses on examining a particular group of discrete-time dynamical systems produced by F-stochastic operators. Our study reveals that these operators are regular, meaning that the future of such systems is stable, allowing for future predictions in biological systems. Additionally, we introduce a new method of defining non-homogeneous Markov measures based on F-stochastic operators and given initial data. We investigate whether these measures are mutually absolutely continuous or singular.

3 F-stochastic Operators

Let us consider a mapping \(W:S^m\rightarrow S^m\) given by

$$\begin{aligned} W(\textbf{x}):\left\{ \begin{array}{ll} x'_0=1-\sum \limits _{k=1}^{m} f_k(\textbf{x}),\\[3mm] x'_k=f_k(\textbf{x}),\ \ k=1,\dots ,m, \end{array}\right. \end{aligned}$$
(1)

where \(\textbf{x}=(x_0,x_1,\dots ,x_m)\in S^m\) and \(f_k: S^m\rightarrow [0,1]\) is a continuous function \((k\ge 1)\).

Definition 3.1

An operator \(W: S^m \rightarrow S^m\) defined by (1) is called an F-stochastic operator if the following conditions are satisfied:

  1. (C1)

    There exists a continuous function \(\varphi : \mathbb {R}^m_+ \rightarrow \mathbb {R}_+\) such that \(\varphi (\textbf{x}) = 0\) implies \(\textbf{x} = 0\), where \(\varphi (\textbf{x}) = \varphi (x_1, x_2, \dots , x_m)\).

  2. (C2)

    For all \(\textbf{x} \in S^m\), \(\sum \limits _{k=1}^m f_k(\textbf{x}) \le \varphi (\textbf{x})\).

  3. (C3)

    There exists an increasing continuous function \(g: [0, 1] \rightarrow [0, 1]\) such that \(g^n(x) \rightarrow 0\) as \(n \rightarrow \infty\) for all \(x \in [0, 1]\) with

    $$\begin{aligned} \varphi \left( W(\textbf{x})\right) \le g\left( 1-x'_0\right) , \quad \forall \textbf{x} \in S^m, \end{aligned}$$

    where \(x'_0 = \left( W(\textbf{x})\right) _0\).

We denote the set of F-stochastic operators by \({\mathcal {F}}\). From Definition 3.1 (see (C2)) we infer that if \(W\in {\mathcal {F}}\), then \(\textbf{e}_0=(1,0,\dots ,0)\) is a fixed point of W.

By Definition 3.1, each F-stochastic operator is associated with two functions \(\varphi\) and g. Therefore, the set of such operators with given \(\varphi\) and g is denoted by \({\mathcal {F}}_{\varphi , g}\). It is obvious that \({\mathcal {F}}=\bigcup \limits _{\varphi , g} {\mathcal {F}}_{\varphi , g}.\)

Example 3.2

Let us provide an example of an F-stochastic operator. Take an increasing function \(h:[0,1]\rightarrow [0,1]\) such that \(h(x)\le x\) for all \(x\in [0,1]\). Define a mapping on \(S^2\) by

$$\begin{aligned} W_h(\textbf{x})=\left\{ \begin{array}{ll} x'_0=1-h\left( ax_1\right) -h\left( bx_2\right) , \\[2mm] x'_1=h(bx_2), \\[2mm] x'_2=h(ax_1), \end{array}\right. \ \ \ a,b\in (0,1) \end{aligned}$$

which is clearly a stochastic operator. Set

$$\begin{aligned} \varphi (x_1,x_2)=a x_1+b x_2, \ \ g(x)=\max (a,b)\,x. \end{aligned}$$

Then \(W_h\) is an F-stochastic operator. Indeed, it is evident that \(\varphi\) satisfies condition (C1), and g is increasing and continuous with \(g^n(x)=\max (a,b)^n x\rightarrow 0\) as \(n\rightarrow \infty\), since \(\max (a,b)<1\). Due to

$$\begin{aligned} h(ax_1)+h(bx_2)\le a x_1+b x_2 = \varphi (x_1,x_2) \end{aligned}$$
(2)

one infers that (C2) holds. Condition (C3) immediately follows from

$$\begin{aligned} \varphi \left( x'_1,x'_2\right) =a\,h(bx_2)+b\,h(ax_1)\le \max (a,b)\left( h(ax_1)+h(bx_2)\right) = g\left( 1-x'_0\right) . \end{aligned}$$
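The inequalities of Example 3.2 can be checked numerically (a minimal NumPy sketch; the function \(h(t)=t/(1+t)\) and the parameters a, b are our own illustrative choices, and we test (C3) with the bound \(g(t)=\max (a,b)\,t\), which also implies the bound for \(g(t)=t\)):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.8, 0.5                  # assumed parameters in (0, 1)
h = lambda t: t / (1.0 + t)      # increasing and h(t) <= t on [0, 1]

def W_h(x):
    y1, y2 = h(b * x[2]), h(a * x[1])
    return np.array([1.0 - y1 - y2, y1, y2])

phi = lambda x: a * x[1] + b * x[2]

for _ in range(1000):
    x = rng.dirichlet(np.ones(3))                        # random point of S^2
    y = W_h(x)
    assert y[1] + y[2] <= phi(x) + 1e-12                 # (C2)
    assert phi(y) <= max(a, b) * (1.0 - y[0]) + 1e-12    # (C3) with g(t) = max(a,b) t
```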

The following outcome demonstrates that every F-stochastic operator is regular.

Theorem 3.3

Let W be an F-stochastic operator on \(S^m\) such that \(W \in {\mathcal {F}}_{\varphi , g}\). Then, for any \(\textbf{x}, \textbf{y} \in S^m\), one has

$$\begin{aligned} \left\| W^{n+1}(\textbf{x}) - W^{n+1}(\textbf{y})\right\| _1 \le 2\left( g^n(\varphi (\textbf{x})) + g^n(\varphi (\textbf{y}))\right) , \quad n \in \mathbb {N}. \end{aligned}$$
(3)

In particular,

$$\begin{aligned} \left\| W^{n+1}(\textbf{x}) - \textbf{e}_0\right\| _1 \le 2 g^n(\varphi (\textbf{x})). \end{aligned}$$
(4)

Proof

Due to \(W\in {\mathcal {F}}_{\varphi , g}\), one can find the functions \(\varphi\) and g given in Definition 3.1. Then, by (C3) together with (C2) and the monotonicity of g, we have

$$\begin{aligned} \varphi (W(\textbf{x}))\le & {} g\left( 1-x'_0\right) \nonumber \\= & {} g\left( 1- \left( 1-\sum \limits _{k=1}^m f_k(\textbf{x})\right) \right) \nonumber \\= & {} g\left( \sum \limits _{k=1}^m f_k(\textbf{x})\right) \nonumber \\\le & {} g(\varphi (\textbf{x})). \end{aligned}$$

Hence,

$$\begin{aligned} \varphi \left( W^{n+1}(\textbf{x})\right) \le g\left( \varphi (W^{n}(\textbf{x}))\right) \le \cdots \le g^n(\varphi (\textbf{x})). \end{aligned}$$
(5)

From (1), one gets

$$\begin{aligned} \Vert W(\textbf{x}) - W(\textbf{y})\Vert _1= & {} \bigg | 1- \sum _{k=1}^{m}f_k(\textbf{x}) - 1 + \sum _{k=1}^{m}f_k(\textbf{y})\bigg | + \sum _{j=1}^{m}|f_j(\textbf{x}) - f_j(\textbf{y})|\\\le & {} 2\sum _{j=1}^{m}|f_j(\textbf{x}) - f_j(\textbf{y})|\\\le & {} 2\Big (\sum _{j=1}^{m}f_j(\textbf{x}) + \sum _{j=1}^{m}f_j(\textbf{y}) \Big )\\\le & {} 2(\varphi (\textbf{x}) + \varphi (\textbf{y})) \end{aligned}$$

Hence, by (5), for any \(n\in \mathbb {N}\)

$$\begin{aligned} \left\| W^{n+1}(\textbf{x}) - W^{n+1}(\textbf{y}) \right\| _1 \le 2\left( \varphi (W^{n}(\textbf{x})) + \varphi (W^{n}(\textbf{y}))\right) \le 2\left( g^n(\varphi (\textbf{x})) + g^n(\varphi (\textbf{y}))\right) . \end{aligned}$$

If \(\textbf{y}= \textbf{e}_0\), since \(\varphi (\textbf{e}_0) = 0\), we arrive at (4). \(\square\)
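The estimate (4) can be probed numerically for the operator \(W_h\) of Example 3.2 (a minimal NumPy sketch; \(h(t)=t/(1+t)\) and the parameters a, b are our own illustrative choices, and we take \(g(t)=\max (a,b)\,t\)):

```python
import numpy as np

a, b = 0.8, 0.5                  # assumed parameters in (0, 1)
h = lambda t: t / (1.0 + t)
W = lambda x: np.array([1 - h(b * x[2]) - h(a * x[1]), h(b * x[2]), h(a * x[1])])
phi = lambda x: a * x[1] + b * x[2]
e0 = np.array([1.0, 0.0, 0.0])
c = max(a, b)                    # g(t) = c t, hence g^n(t) = c^n t

x = np.array([0.2, 0.5, 0.3])
y = x.copy()
for n in range(30):
    y = W(y)                     # y = W^{n+1}(x)
    # bound (4): ||W^{n+1}(x) - e_0||_1 <= 2 g^n(phi(x))
    assert np.abs(y - e0).sum() <= 2 * c**n * phi(x) + 1e-12
```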

Now, given a collection \(X = \{\textbf{x}_1,\dots , \textbf{x}_{m+1}\}\subset S^{m}\), we define for each \(n\in \mathbb {N}\), the vectors

$$\begin{aligned} \textbf{x}_k^{(n)} = W^n(\textbf{x}_k) = \Big ( x_{k,0}^{(n)}, x_{k,1}^{(n)}, \dots , x_{k,m}^{(n)} \Big ), \quad k\in \{1,2,\dots , m+1\} \end{aligned}$$

Using these vectors, we define a sequence \(\{\mathbb {P}_{n;X}\}\) of stochastic matrices as follows:

$$\begin{aligned} \mathbb {P}_{n;X}=\left( \begin{array}{cccc} x_{1,0}^{(n)} &{} x_{1,1}^{(n)} &{} \dots &{} x_{1,m}^{(n)} \\ x_{2,0}^{(n)} &{} x_{2,1}^{(n)} &{} \dots &{} x_{2,m}^{(n)} \\ \vdots &{} \vdots &{} \cdots &{} \vdots \\ x_{m+1,0}^{(n)} &{} x_{m+1,1}^{(n)} &{} \dots &{} x_{m+1,m}^{(n)} \\ \end{array} \right) \end{aligned}$$
(6)

In short we may write \(\mathbb {P}_{n;X}:=(P_{ij,X}^{(n)})_{i,j=1}^{m+1}\), where \(P_{ij,X}^{(n)}=x_{i,j-1}^{(n)}\). It is clear that \(\mathbb {P}_{n;X}\) is an \((m+1)\times (m+1)\) stochastic matrix.

By means of the sequence \(\{\mathbb {P}_{n;X}\}\) one can define a non-homogeneous Markov chain (NHMC). For \(k<n\), we put

$$\begin{aligned} P^{k,n}_X:=\mathbb {P}_{k+1;X} \mathbb {P}_{k+2;X}\ldots \mathbb {P}_{n;X}. \end{aligned}$$

We notice that the sequence \(\{\mathbb {P}_{n;X}\}\) is called the generating sequence of the NHMC. Therefore, every NHMC \(\{P^{k,n}_X\}\) can be identified with its generating sequence.
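The construction of the generating sequence (6) and the products \(P^{k,n}_X\) can be sketched as follows (a minimal NumPy sketch; W, h, a, b and the collection X are our own illustrative choices, and the factors are composed in the order in which the chain evolves in time):

```python
import numpy as np

a, b = 0.8, 0.5                       # assumed parameters in (0, 1)
h = lambda t: t / (1.0 + t)           # increasing, h(t) <= t
W = lambda x: np.array([1 - h(b * x[2]) - h(a * x[1]), h(b * x[2]), h(a * x[1])])

# a collection X of m+1 = 3 points of S^2
X = [np.array(v) for v in ([1/3, 1/3, 1/3], [0.5, 0.25, 0.25], [0.2, 0.3, 0.5])]

def P_n(n):
    """The matrix (6): row i is the trajectory point W^n(x_i)."""
    rows = []
    for x in X:
        y = x.copy()
        for _ in range(n):
            y = W(y)
        rows.append(y)
    return np.array(rows)

def P_kn(k, n):
    """Product of the generating matrices for times k+1, ..., n (k < n)."""
    M = np.eye(len(X))
    for j in range(k + 1, n + 1):
        M = M @ P_n(j)
    return M

M = P_kn(0, 30)
# every row of the product approaches e_0 = (1, 0, 0)
```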

Consider a projection operator P, i.e., \(P^2=P\). A non-homogeneous Markov chain \(\{P^{k,n}_X\}\) is called uniformly P-ergodic if for every \(k\ge 0\) one has

$$\begin{aligned} \lim _{n\rightarrow \infty }\left\| P^{k,n}_X-P\right\| =0. \end{aligned}$$

We recall that, given a stochastic matrix \(\mathbb {T}=\left( t_{ij}\right) _{i,j=1}^m\), its Dobrushin ergodicity coefficient is calculated as follows:

$$\begin{aligned} \delta (\mathbb {T})=\frac{1}{2}\max _{i<j} \sum _{k=1}^{m}\left| t_{i k}-t_{j k}\right| . \end{aligned}$$

A non-homogeneous Markov chain \(\{P^{k,n}_X\}\) is weakly ergodic if \(\delta \left( {P}^{k, n}_X\right) \underset{n \rightarrow \infty }{\rightarrow }\ 0\) for every \(k\in \mathbb {N}\).
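The Dobrushin coefficient and its submultiplicativity, which underlies the proof of weak ergodicity below, can be illustrated as follows (a minimal NumPy sketch; the matrices T and S are arbitrary illustrative choices):

```python
import numpy as np

def dobrushin(T):
    """Dobrushin ergodicity coefficient: half the largest l1 distance between rows."""
    T = np.asarray(T, dtype=float)
    m = T.shape[0]
    return max(0.5 * np.abs(T[i] - T[j]).sum()
               for i in range(m) for j in range(i + 1, m))

T = np.array([[0.7, 0.3],
              [0.4, 0.6]])
S = np.array([[0.5, 0.5],
              [0.1, 0.9]])
# submultiplicativity: delta(T S) <= delta(T) delta(S)
assert dobrushin(T @ S) <= dobrushin(T) * dobrushin(S) + 1e-12
```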

The uniform and weak ergodicities have been intensively investigated in [49, 50].

Denote

$$\begin{aligned} \mathbb {P} = \left( \begin{array}{cccc} 1 &{} 0&{} \cdots &{} 0 \\ 1 &{} 0&{} \cdots &{} 0 \\ \vdots &{} \vdots &{} &{} \vdots \\ 1 &{} 0 &{} \cdots &{} 0 \\ \end{array} \right) \end{aligned}$$

Moreover, one can check that

$$\begin{aligned} \mathbb {P}_{n;X} \mathbb {P}=\mathbb {P}, \quad \forall n\geqslant 1. \end{aligned}$$

The next result, which concerns perturbations of uniformly ergodic NHMCs, was proved in [49, 51].

Theorem 3.4

Let \(\{T_n\}, \{S_n\}\) be two generating sequences of NHMCs \(\{T^{k,n}\}, \{S^{k,n}\}\), respectively. Assume that \(T_{n}P=P\), \(S_{n}P=P\), and \(\sum _{n=1}^{\infty }\Vert T_n-S_n\Vert <\infty\). Then \(\{T^{k, n}\}\) is uniformly P-ergodic if and only if \(\{S^{k, n}\}\) is uniformly P-ergodic.

Theorem 3.5

Let W be an F-stochastic operator such that \(W\in {\mathcal {F}}_{\varphi , g}\) with

$$\begin{aligned} \sum _{n=1}^{\infty } g^{n}(x)<\infty , \ \ \ \forall x\in [0,1]. \end{aligned}$$
(7)

Then, the non-homogeneous Markov chain \(\{P^{k,n}_X\}\) is uniformly \(\mathbb {P}\)-ergodic.

Proof

We first notice that \(P_{ij,X}^{(n)}=W^n(\textbf{x}_{i})_{j-1}\). Therefore, from (4) one gets

$$\begin{aligned} \left\| W^{n+1}(\textbf{x}_i) - \textbf{e}_0\right\| _1 \le 2 g^n\left( \varphi (\textbf{x}_i)\right) , \quad i =1,2, \dots , m+1 \end{aligned}$$

which implies

$$\begin{aligned}{} & {} \left| P^{(n+1)}_{i,1;X} -1\right| \le 2 g^n(\varphi (\textbf{x}_i)) \end{aligned}$$
(8)
$$\begin{aligned}{} & {} P^{(n+1)}_{i,j;X} \le 2 g^n(\varphi (\textbf{x}_i)), \end{aligned}$$
(9)

for all \(i\in \{1,\dots , m+1\}, j\in \{2,\dots ,m+1\}\).

Hence, from (7) and (8), (9), we infer that

$$\begin{aligned} \sum _{n=1}^{\infty }\left\| \mathbb {P}_{n,X}-\mathbb {P}\right\| <\infty . \end{aligned}$$

Since the homogeneous chain \(\left\{ S_{n}\right\}\), where \(S_{n} = \mathbb {P}\) for all \(n \ge 1\), is uniformly \(\mathbb {P}\)-ergodic, Theorem 3.4 implies that \(\{{P}^{k,n}_X\}\) is uniformly \(\mathbb {P}\)-ergodic. This completes the proof. \(\square\)

Theorem 3.6

The non-homogeneous Markov chain \({P}^{k,n}_X\) is weakly ergodic.

Proof

Due to (3), we first calculate

$$\begin{aligned} \delta \left( \mathbb {P}_{\ell + 1,X}\right)= & {} \frac{1}{2} \max _{i<j} \left\| W^{\ell +1}(\textbf{x}_{i})-W^{\ell +1}\left( \textbf{x}_{j}\right) \right\| _{1} \\{} & {} \le \max _{i<j}\left( g^{\ell }(\varphi (\textbf{x}_{i})) + g^{\ell }\left( \varphi (\textbf{x}_{j})\right) \right) . \end{aligned}$$

Due to \(g^{n}(x) \rightarrow 0\) as \(n \rightarrow \infty\), there exists \(n_{0} \in \mathbb {N}\) such that

$$\begin{aligned} g^{n}\left( \varphi (\textbf{x}_{j})\right) \le \frac{1}{3}, \ \ \forall n \geqslant n_{0}, \forall j \in \{1,\dots , m+1\}. \end{aligned}$$

Hence, \(\delta \left( \mathbb {P}_{\ell + 1,X}\right) \leqslant \frac{2}{3}\) for all \(\ell \geqslant n_0\).

Consequently,

$$\begin{aligned} \delta \left( {P}^{k,n}_X\right)= & {} \delta \left( \mathbb {P}_{n,X} \cdots \mathbb {P}_{k+1,X}\right) \\\le & {} \delta \left( \mathbb {P}_{n,X}\right) \cdots \delta \left( \mathbb {P}_{k+1,X}\right) \\\le & {} \left( \frac{2}{3}\right) ^{n-N_{0}}, \quad N_{0}=\max \left\{ k, n_{0}\right\} . \end{aligned}$$

So \(\delta \left( {P}^{k, n}_X\right) \underset{n \rightarrow \infty }{\longrightarrow }\ 0\), which yields the weak ergodicity. This completes the proof. \(\square\)

We point out that, according to Theorems 3.5 and 3.6, the chain remains weakly ergodic even when (7) is not satisfied.

4 Entangled Quantum Markov Chains

In this section, we are going to construct entangled quantum Markov chains associated with NHMC \({P}^{k,n}_X\).

Let \(d\in {\mathbb {N}}\), and consider the C\(^*\)–algebra \({\mathcal {A}}:= {\mathcal {M}}_d\) of \(d\times d\) matrices with complex entries with identity \(\textbf{1 }\!\!\!\textrm{I}\). Let \(D = \{1,\dots , d\}\). Consider the quasi-local algebra \({\mathcal {A}}_{\mathbb {N}} = \bigotimes _{n\in \mathbb {N}}{\mathcal {M}}_d\). Let \(\Vert \cdot \Vert\) be the C\(^*\)-norm on \({\mathcal {M}}_d\).

For every \(i,j\in D\), we consider the matrix \(E_{ij} = (\delta _{ik}\delta _{jl})_{1\le k,l\le d}\), where \(\delta\) denotes the Kronecker symbol. In the following, for each \(n\in \mathbb {N}\),

$$\begin{aligned} j_n: {\mathcal {M}}_d\mapsto {\mathcal {A}}_{n}:= \underbrace{\textbf{1 }\!\!\!\textrm{I}\otimes \textbf{1 }\!\!\!\textrm{I}\otimes \cdots \otimes \textbf{1 }\!\!\!\textrm{I}}_{\hbox { n factors}} \otimes {\mathcal {M}}_{d}\otimes \textbf{1 }\!\!\!\textrm{I}\cdots \subset {\mathcal {A}}_{\mathbb {N}} \end{aligned}$$
(10)

Denote the embedding into the n–th factor of the algebra \({\mathcal {A}}_{\mathbb {N}}\) by \(j_n\). The shift endomorphism [18, 20, 30] on \({\mathcal {A}}_{\mathbb {N}}\) is denoted by \(\sigma\), and it satisfies

$$\begin{aligned} \sigma \circ j_n = j_{n+1}. \end{aligned}$$

For any \(\Lambda \subset _{\text {fin}} \mathbb {N}\), let us define the local algebra on \(\Lambda\) by

$$\begin{aligned} {\mathcal {A}}_{\Lambda }:= \bigvee _{n\in \Lambda}{\mathcal {A}}_n\equiv \bigotimes _{n\in \Lambda }{\mathcal {A}}_{n}. \end{aligned}$$
(11)

In particular, for each \(n\in \mathbb {N}\), we write \({\mathcal {A}}_{[0,n]} = \bigotimes _{k=0}^{n}{\mathcal {A}}_k\). Let \({\mathcal {D}}_d\) be the subalgebra of \({\mathcal {M}}_d\) of diagonal matrices, and set

$$\begin{aligned} {\mathcal {D}}_{e}:= \bigvee _{n\in \mathbb {N}} j_{n}\left( {\mathcal {D}}_d\right) \equiv \bigotimes _{n\in \mathbb {N}} {\mathcal {D}}_d. \end{aligned}$$

Consider matrices \(A=(a_{ij})\) and \(B=(b_{ij})\) belonging to \({\mathcal {M}}_d\). The Schur product, denoted as \(A\diamond B\), is defined as follows:

$$\begin{aligned} A\diamond B:= \left( a_{ij}b_{ij}\right) \in {\mathcal {M}}_d. \end{aligned}$$
(12)

Considering the expression

$$\begin{aligned} (A\otimes B)_{(i,j)(k,l)}=a_{ik}b_{jl}, \end{aligned}$$

it is possible to extend the Schur multiplication [14, 15] by introducing a mapping \(m: {\mathcal {M}}_d \otimes {\mathcal {M}}_d \rightarrow {\mathcal {M}}_d\) defined by

$$\begin{aligned} m(A\otimes B)_{ij}= (A\otimes B)_{(i,i)(j,j)}. \end{aligned}$$
(13)
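The Schur product (12) and the mapping m of (13) can be illustrated as follows (a minimal NumPy sketch; the matrices A and B are arbitrary illustrative choices, and np.kron realizes the index convention \((A\otimes B)_{(i,j)(k,l)}=a_{ik}b_{jl}\)):

```python
import numpy as np

def schur(A, B):
    """Schur (entrywise) product A ⋄ B of (12)."""
    return A * B

def m_map(A, B):
    """m(A ⊗ B)_{ij} = (A ⊗ B)_{(i,i)(j,j)}, as in (13)."""
    d = A.shape[0]
    AB = np.kron(A, B)   # entry (A⊗B)_{(i,j)(k,l)} = a_{ik} b_{jl} sits at row i*d+j, column k*d+l
    return np.array([[AB[i * d + i, j * d + j] for j in range(d)] for i in range(d)])

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
# m applied to A ⊗ B recovers the Schur product A ⋄ B
assert np.allclose(m_map(A, B), schur(A, B))
```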

If \(\Pi : {\mathcal {M}}_d \rightarrow {\mathcal {M}}_d\) is Schur identity preserving, then the mapping \({\mathcal {E}}: {\mathcal {M}}_d \otimes {\mathcal {M}}_d \rightarrow {\mathcal {M}}_d\) expressed as

$$\begin{aligned} {\mathcal {E}}= m\circ (id \otimes \Pi ) \end{aligned}$$
(14)

is completely positive and identity preserving (with \(\circ\) denoting the composition of maps), thereby qualifying as a transition expectation.

Assume that \(\phi _{0}\) is an initial state on \({\mathcal {A}}_0\). Let \(\{\Pi _n\}\) be a sequence of Schur identity preserving maps, and let \({\mathcal {E}}_n: {\mathcal {A}}_n\otimes {\mathcal {A}}_{n+1}\rightarrow {\mathcal {A}}_n\) be the transition expectation associated with \(\Pi _n\) defined by (14). Then a state \(\varphi\) on \({\mathcal {A}}_{\mathbb {N}}\) is called an (inhomogeneous) entangled quantum Markov chain if one has

$$\begin{aligned} \varphi \left( a_0\otimes a_1\otimes \cdots \otimes a_n\right) =\phi _{0}\left( {\mathcal {E}}_0\left( a_{0} \otimes {\mathcal {E}}_1\left( a_{1} \otimes \cdots \otimes {\mathcal {E}}_{n-1}\left( a_{n-1} \otimes {\mathcal {E}}_{n}\left( a_{n} \otimes \textbf{1 }\!\!\!\textrm{I}\right) \right) \cdots \right) \right) \right) , \end{aligned}$$

for every \(a_{0},a_1, \cdots , a_{n} \in {\mathcal {M}}_d.\)

We notice that the notion of entangled quantum Markov chain was first introduced and investigated in [5, 7, 8].

Let \(\pi =(\pi _i)_{i\in D}\) be a probability distribution on \(D:= \{1,\dots , m+1\}\) (so that, in this section, \(d=m+1\)), and let \(\{\mathbb {P}_{n,X}=(P_{ij,X}^{(n)})\}\) be the sequence of \((m+1)\times (m+1)\) stochastic matrices defined in the previous section. Consider the map

$$\begin{aligned} \Pi _n(a) = \sum _{ij}\left( \sum _{kl}\sqrt{P_{ik,X}^{(n)}P_{jl,X}^{(n)}}a_{kl}\right) E_{ij} \end{aligned}$$
(15)

Let \({\mathcal {E}}_n\) be the transition expectation associated with \(\Pi _n\) through (14), i.e.

$$\begin{aligned} {\mathcal {E}}_n(a_n\otimes a_{n+1}) = a_n\diamond \Pi _n(a_{n+1})= \sum _{ij}\left( \sum _{kl}\sqrt{P_{ik,X}^{(n)}P_{jl,X}^{(n)}}a_{n; ij}a_{n+1; kl}\right) E_{ij} \end{aligned}$$
(16)

for every \(a_n = (a_{n;ij})\in {\mathcal {A}}_n, a_{n+1} = (a_{n+1;ij})\in {\mathcal {A}}_{n+1}\).
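The entangling map (15) and the transition expectation (16) admit a compact matrix form, since \(\Pi _n(a)=\sqrt{\mathbb {P}_{n,X}}\,a\,\sqrt{\mathbb {P}_{n,X}}^{T}\) with entrywise square roots (a minimal NumPy sketch; the stochastic matrix P below is an arbitrary stand-in for \(\mathbb {P}_{n,X}\)):

```python
import numpy as np

def Pi(P, a):
    """Entangling map (15): Pi(a)_{ij} = sum_{k,l} sqrt(P_{ik} P_{jl}) a_{kl},
    i.e. sqrt(P) a sqrt(P)^T with entrywise square roots."""
    s = np.sqrt(P)
    return s @ a @ s.T

def E(P, a, b):
    """Transition expectation (16): E(a ⊗ b) = a ⋄ Pi(b)."""
    return a * Pi(P, b)

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])      # stand-in for a matrix P_{n;X}
I = np.eye(2)
# Pi is Schur identity preserving, hence E preserves the identity
assert np.allclose(E(P, I, I), I)
```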

The backward and forward Markov operators associated with \({\mathcal {E}}_n\) are, respectively, defined by

$$\begin{aligned} {\mathcal {P}}_{b;n}(a_{n+1}) = {\mathcal {E}}_n(\textbf{1 }\!\!\!\textrm{I}_n\otimes a_{n+1}) = \sum _{i}\left( \sum _{kl}\sqrt{P_{ik,X}^{(n)}P_{il,X}^{(n)}}a_{n+1; kl}\right) E_{ii} \end{aligned}$$
(17)
$$\begin{aligned} {\mathcal {P}}_{f;n}(a_{n}) = {\mathcal {E}}_n(a_n\otimes \textbf{1 }\!\!\!\textrm{I}_{n+1}) = \sum _{ij}\left( \sum _{k}\sqrt{P_{ik,X}^{(n)}P_{jk,X}^{(n)}}a_{n; ij}\right) E_{ij} \end{aligned}$$
(18)
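The backward and forward operators (17)–(18) can be sketched in the same way (a minimal NumPy sketch; P is again an arbitrary stochastic stand-in, and we check that both operators are unital):

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])      # stand-in for a matrix P_{n;X}
s = np.sqrt(P)                  # entrywise square root

def P_b(b):
    """Backward operator (17): diagonal output, (P_b(b))_{ii} = sum_{k,l} sqrt(P_{ik} P_{il}) b_{kl}."""
    return np.diag(np.diag(s @ b @ s.T))

def P_f(a):
    """Forward operator (18): (P_f(a))_{ij} = (sum_k sqrt(P_{ik} P_{jk})) a_{ij}."""
    return (s @ s.T) * a

I = np.eye(2)
# both Markov operators are unital
assert np.allclose(P_b(I), I) and np.allclose(P_f(I), I)
```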

Definition 4.1

Consider states \(\varphi\) and \(\psi\) on \({\mathcal {A}}_{\mathbb {N}}\). We say that \(\varphi\) is \(\psi\)-mixing if, for any \(n,m\in \mathbb {N}\), \(a_{[0,n]}\in {\mathcal {A}}_{[0,n]}\), \(b_{[0,m]}\in {\mathcal {A}}_{[0,m]}\), the following condition holds:

$$\begin{aligned} \lim_{N\rightarrow \infty }\varphi \left( a_{[0,n]}\sigma ^{n+N} \left( b_{[0,m]}\right) \right) = \varphi \left( a_{[0,n]}\right) \psi \left( b_{[0,m]}\right) . \end{aligned}$$
(19)

In the above definition if the states \(\varphi\) and \(\psi\) coincide, we obtain the usual mixing property of the state \(\varphi\). Notice that the notion of \(\psi\)–mixing was introduced in [31] for the homogeneous entangled quantum Markov chains.

Theorem 4.2

Let W be an F-stochastic operator such that \(W\in {\mathcal {F}}_{\varphi , g}\) with

$$\begin{aligned} \sum _{n=1}^{\infty } g^{n}(x)<\infty , \ \ \ \forall x\in [0,1]. \end{aligned}$$
(20)

Assume that \(\{\mathbb {P}_{n,X}\}\) is the associated generating sequence of the NHMC \(\{P^{k,n}_X\}\). Then the corresponding inhomogeneous entangled QMC is \(\psi\)-mixing.

Proof

Let \(m\in \mathbb {N}\). Let \(a=a_0\otimes a_1\otimes \cdots \otimes a_m, b = b_0\otimes b_1\otimes \cdots \otimes b_m\in {\mathcal {A}}_{[0,m]}\). For \(n\in \mathbb {N}\), we have

$$\begin{aligned} \varphi (a\sigma ^n(b)) = \varphi \left( a\otimes \textbf{1 }\!\!\!\textrm{I}_{m+1}\otimes \cdots \textbf{1 }\!\!\!\textrm{I}_{n-2}\otimes {\hat{b}}_n\right) \end{aligned}$$

where

$$\begin{aligned} {\hat{b}}_n = {\mathcal {P}}_{b,n-1}\Big ({\mathcal {E}}_n\Big (j_n(b_0)\otimes {\mathcal {E}}_{n+1}\Big (j_{n+1}(b_1)\otimes \cdots \otimes {\mathcal {E}}_{n+m}\Big (j_{n+m}(b_m)\otimes \textbf{1 }\!\!\!\textrm{I}\Big )\cdots \Big )\Big )\Big ) \end{aligned}$$

Then

$$\begin{aligned} \varphi (a\sigma ^n(b))= & {} \varphi \left( a\otimes \textbf{1 }\!\!\!\textrm{I}_{m+1}\otimes \cdots \textbf{1 }\!\!\!\textrm{I}_{n-2}\otimes {\hat{b}}_n\right) \\= & {} \phi _0\left( {\mathcal {E}}_0\left( a_0\otimes {\mathcal {E}}_1\left( a_1\otimes \cdots {\mathcal {E}}_m(a_m\otimes {\mathcal {E}}_{m+1}(\textbf{1 }\!\!\!\textrm{I}_{m+1}\otimes \cdots \otimes {\mathcal {E}}_{n-2}(\textbf{1 }\!\!\!\textrm{I}_{n-2}\otimes {\hat{b}}_n)\cdots )\right) \cdots )\right) \right) \\= & {} \phi _0\left( {\mathcal {E}}_0(a_0\otimes {\mathcal {E}}_1(a_1\otimes \cdots {\mathcal {E}}_m(a_m\otimes {\mathcal {P}}_{b;m+1}({\mathcal {P}}_{b;m+2}(\cdots {\mathcal {P}}_{b;n-2}({\hat{b}}_n)\cdots )))\cdots ))\right) \\ \end{aligned}$$

Remark that, for any diagonal matrix \(c = \sum _jc_{j}E_{jj}\in {\mathcal {D}}_d\) and \(\ell \in \mathbb {N}\), we have

$$\begin{aligned} {\mathcal {P}}_{b; \ell }(c) = \sum _{i,j}P_{ij,X}^{(\ell )}c_{j}E_{ii} \end{aligned}$$

Recursive iterations lead to

$$\begin{aligned} {\mathcal {P}}_{b;m+1}\left( {\mathcal {P}}_{b;m+2}\left( \cdots {\mathcal {P}}_{b;n-2} \left( c\right) \cdots \right) \right) =\sum _{i,j}\left( P^{m+1,n-2}_{X}\right) ^*_{ij}c_{jj} E_{ii}. \end{aligned}$$
(21)

Hence, one finds

$$\begin{aligned} {\hat{b}}_n= & {} {\mathcal {P}}_{b; n-1}\left( {\mathcal {E}}_n(b_0\otimes {\mathcal {E}}_{n+1}(b_1\otimes \cdots {\mathcal {E}}_{n+m-1}(b_{m-1}\otimes {\mathcal {E}}_{n+m}(b_m\otimes \textbf{1 }\!\!\!\textrm{I}))\cdots )) \right) \\= & {} \textbf{1 }\!\!\!\textrm{I}\diamond \Big ( \sqrt{\mathbb {P}_{n-1,X}} \Big (b_0\diamond \Big (\sqrt{\mathbb {P}_{n,X}} \cdots b_m\diamond \Big ( \sqrt{\mathbb {P}_{n+m,X}}\sqrt{\mathbb {P}_{n+m,X}}^{*}\Big )\cdots \sqrt{\mathbb {P}_{n,X}}^{*}\Big )\sqrt{\mathbb {P}_{n-1,X}}^{*}\Big )\Big ) \end{aligned}$$

Then due to Theorem 3.5 we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty }{\hat{b}}_n= & {} \textbf{1 }\!\!\!\textrm{I}\diamond \Big ( \sqrt{P} \Big (b_0\diamond \Big (\sqrt{P} \cdots b_m\diamond \Big ( \sqrt{P}\sqrt{P}^{*}\Big )\cdots \sqrt{P}^{*}\Big )\sqrt{P}^{*}\Big )\Big )\nonumber \\= & {} {\mathcal {E}}\Big (\textbf{1 }\!\!\!\textrm{I}\otimes {\mathcal {E}}\Big (b_0\otimes \cdots \otimes {\mathcal {E}}\Big (b_{m-1}\otimes {\mathcal {E}}\Big (b_m\otimes \textbf{1 }\!\!\!\textrm{I}\Big )\Big )\Big )\Big ) =: {\hat{b}} \end{aligned}$$
(22)

where \({\mathcal {E}}\) is the entangled transition expectation associated with the stochastic matrix P, i.e.

$$\begin{aligned} {\mathcal {E}}(x\otimes y) = x \diamond \Big (\sqrt{P}\, y \, \sqrt{P}^{*}\Big ); \quad \forall x,y\in {\mathcal {M}}_{m+1}. \end{aligned}$$
(23)

On the other hand, again by Theorem 3.5 we have \({P}^{m,n}_{X}\longrightarrow P\) as \(n\rightarrow \infty\).

For \(b\in {\mathcal {M}}_{m+1}\), let \(\psi _{i}(b): = \sum _{j}P_{ij}{\hat{b}}_{jj}\). Since P is a rank-one stochastic matrix, we have \(P = 1_{m+1}\otimes p_{\infty }\), where \(1_{m+1} =\left( \begin{array}{c} 1 \\ \vdots \\ 1 \\ \end{array} \right)\) and \(p_{\infty }=(1,0,\dots ,0)\). Then \(\psi _{i}(A) = \psi _{\infty }(A):= A_{11}\) for every \(i\in \{1,2,\dots , m+1\}\). Put

$$\begin{aligned} \psi (b):= \psi _{\infty }\left( {\mathcal {E}}\Big (\textbf{1 }\!\!\!\textrm{I}\otimes {\mathcal {E}}\Big (b_0\otimes \cdots \otimes {\mathcal {E}}\Big (b_{m-1}\otimes {\mathcal {E}}\Big (b_m\otimes \textbf{1 }\!\!\!\textrm{I}\Big )\Big )\Big )\Big )\right) \end{aligned}$$
(24)

Let \(\alpha _n:=\phi _0({\mathcal {E}}_0(a_0\otimes {\mathcal {E}}_1(a_1\otimes \cdots {\mathcal {E}}_m(a_m\otimes {\mathcal {P}}_{b;m+1}({\mathcal {P}}_{b;m+2}(\cdots {\mathcal {P}}_{b;n-2}({\hat{b}})\cdots )))\cdots )))\). Since \({\hat{b}}\) is a diagonal matrix, from (21) it follows that

$$\begin{aligned} \alpha _n = \phi _0\left( {\mathcal {E}}_0\left( a_0\otimes {\mathcal {E}}_1\left( a_1\otimes \cdots {\mathcal {E}}_m\Big (a_m\otimes \sum _{i,j} \left( P^{m+1,n-2}_{X}\right) ^*_{ij}{\hat{b}}_{jj} E_{ii}\Big ) \cdots \right) \right) \right) \end{aligned}$$

From the above consideration, we infer that

$$\begin{aligned} \lim _{n\rightarrow \infty }\alpha _n = \phi _0\left( {\mathcal {E}}_0\left( a_0\otimes {\mathcal {E}}_1\left( a_1\otimes \cdots {\mathcal {E}}_m\Big (a_m\otimes \sum _{i}\psi _{\infty }({\hat{b}}) E_{ii}\Big ) \cdots \right) \right) \right) = \varphi (a)\psi (b) \end{aligned}$$

On the other hand, one gets

$$\begin{aligned} \left| \varphi \left( a\sigma ^n(b)\right) - \alpha _n\right|= & {} \left| \phi _0\Big ({\mathcal {E}}_0\Big (a_0\otimes \cdots {\mathcal {E}}_m\Big (a_m\otimes \sum _{i,j}\left( P^{m+1,n-2}_{X}\right) ^*_{ij}\left[ {\hat{b}}_n-{\hat{b}}\right] _{jj}E_{ii}\Big )\cdots \Big )\Big )\right| \\\le & {} \left| \phi _0\Big ({\mathcal {E}}_0\Big (a_0\otimes \cdots {\mathcal {E}}_m\Big (a_m\otimes \underbrace{\sum _{i,j}\left( P^{m+1,n-2}_{X}\right) ^*_{ij}E_{ii}}_{=\textbf{1 }\!\!\!\textrm{I}}\Big )\cdots \Big )\Big )\right| \left\| {\hat{b}}_n-{\hat{b}}\right\| \\\le & {} \left| \varphi (a)\right| \left\| {\hat{b}}_n-{\hat{b}}\right\| \overset{(22)}{\longrightarrow }0. \end{aligned}$$

Since

$$\begin{aligned} \left| \varphi (a\sigma ^n(b)) - \varphi (a)\psi (b)\right| \le \left| \varphi (a\sigma ^n(b)) - \alpha _n\right| + \left| \alpha _n - \varphi (a)\psi (b)\right| \end{aligned}$$

we conclude that \(\varphi\) is \(\psi\)-mixing. \(\square\)

5 Conclusion

In this paper, we have introduced a class of F-stochastic operators on a finite-dimensional simplex. Each of these operators is regular, ensuring that the species distribution in the next generation corresponds to the species distribution in the previous one in the long run. We have proposed a novel scheme to define a non-homogeneous Markov chain based on an F-stochastic operator and given initial data. Exploiting the uniform ergodicity of the non-homogeneous Markov chain, we have defined a non-homogeneous entangled quantum Markov chain. Furthermore, we have established that this non-homogeneous entangled quantum Markov chain exhibits a \(\psi\)-mixing property.