1 Introduction

We denote by \(M^n\) the set of n by n complex matrices. Given a fixed density matrix \(\beta :\mathbb {C}^n \rightarrow \mathbb {C}^n\) and a fixed unitary operator \(U : \mathbb {C}^n \otimes \mathbb {C}^n \rightarrow \mathbb {C}^n \otimes \mathbb {C}^n\), the transformation \(\Phi : M^n \rightarrow M^n\)

$$\begin{aligned} Q \rightarrow \Phi (Q) = \text {Tr}_2 (U ( Q \otimes \beta ) U^*) \end{aligned}$$

describes the interaction of Q with the external source \(\beta \).
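
As a numerical sketch (our own illustration, not part of the argument), the map \(\Phi \) can be computed with numpy by forming \(Q \otimes \beta \) in the Kronecker ordering of the product basis and tracing out the second factor:

```python
import numpy as np

def ptrace2(T, n):
    # Partial trace over the second tensor factor in Kronecker ordering:
    # Tr_2(T)[i, k] = sum_j T[i*n + j, k*n + j].
    return np.einsum('ijkj->ik', T.reshape(n, n, n, n))

def channel(Q, U, beta):
    # Phi(Q) = Tr_2( U (Q tensor beta) U* )
    n = Q.shape[0]
    return ptrace2(U @ np.kron(Q, beta) @ U.conj().T, n)

rng = np.random.default_rng(0)
n = 2
beta = np.diag([0.7, 0.3])                      # a density matrix with positive spectrum
M = rng.normal(size=(n*n, n*n)) + 1j * rng.normal(size=(n*n, n*n))
U, _ = np.linalg.qr(M)                          # a random unitary on C^n tensor C^n
Q = np.array([[0.6, 0.2], [0.2, 0.4]])          # a density matrix
P = channel(Q, U, beta)
tr_preserved = np.isclose(np.trace(P), 1.0)
```

Since \(\mathrm{Tr}(U X U^*) = \mathrm{Tr}(X)\) and \(\mathrm{Tr}(Q \otimes \beta )=1\), the output is again a density matrix.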

We assume that all eigenvalues of \(\beta \) are strictly positive.

In [4], the model is explained precisely: Q is the state of the small system and \(\beta \) describes the environment. Then \( \Phi (Q)\) gives the output of the action of \(\beta \) on Q under the unitary operator U.

Other related papers are [2, 3]. Our proof is of a quite different nature from those of these papers.

The main question concerns the convergence of the iterates \(\Phi ^n (Q_0) \), as \(n \rightarrow \infty \), for any given \(Q_0\). It is natural to expect that any limit (if it exists) is a fixed point of \(\Phi \).

Our purpose is to show the following theorem:

Theorem 1

Given a fixed density matrix \(\beta :\mathbb {C}^n \rightarrow \mathbb {C}^n\), for an open and dense set of unitary operators \(U : \mathbb {C}^n \otimes \mathbb {C}^n \rightarrow \mathbb {C}^n\otimes \mathbb {C}^n\) the transformation \(\Phi : M^n \rightarrow M^n\)

$$\begin{aligned} Q \rightarrow \Phi (Q) = \mathrm{Tr}_2 (U ( Q \otimes \beta ) U^*) \end{aligned}$$

has a unique fixed point \(Q_\Phi \). In the case \(n=2\), we present explicitly the analytic characterization of this family of U and also an explicit formula for \(Q_\Phi \).

This result implies one of the main results in [4] that we mentioned before.

2 The general dimensional case

Suppose V is a complex Hilbert space of dimension n \(\ge 2\) and \(\mathscr {L}(V)\) denotes the space of linear transformations from V to itself.

The partial trace \(\mathrm{Tr}_2:\mathscr {L}(V\otimes V)\rightarrow \mathscr {L}(V)\) is then the linear map determined by \(\mathrm{Tr}_2 (A \otimes B)= \mathrm{Tr} (B)\, A\).

There is a canonical way to extend the inner product on V to \(V \otimes V\).

We fix a density matrix \(\beta \in \mathscr {L}(V)\). For each unitary operator \(U \in \mathscr {L}(V\otimes V)\), we denote by \(\Phi _U: \mathscr {L}(V)\rightarrow \mathscr {L}(V) \) the linear transformation

$$\begin{aligned} \Phi _U(A) = \mathrm{Tr}_2 ( U (A \otimes \beta ) U^*). \end{aligned}$$

We denote by \(\Gamma \subset \mathscr {L}(V)\) the set of density operators. It will be shown that \(\Phi _U\) preserves \(\Gamma \). As \(\Gamma \) is convex and compact and \(\Phi _U\) is continuous, \(\Phi _U\) has a fixed point in \(\Gamma \).

The set of unitary operators is denoted by \(\mathscr {U}\).

If A is a (nonzero) fixed point of \(\Phi _U\), then the rank of \( \Phi _U - I\) is at most \(n^2-1\).

We will show that there exists a proper real analytic subset \(X \subset \mathscr {U} \) such that if U is not in X, then the rank of \(\Phi _U - I\) is \(n^2 -1\). In this case, the fixed point is unique. More precisely,

$$\begin{aligned} X =\{ U \in \mathscr {U} :\, \text {rank}\, (\Phi _U - I)< n^2 -1\}. \end{aligned}$$

This \(X\subset \mathscr {U}\) is an analytic set because it is described by equations setting determinants of minors equal to zero. It is known that the complement of an analytic set, also known as a Zariski open set, is either empty or open and dense in the analytic manifold (see [1]). Therefore, to prove our main result, it suffices to present an explicit U such that the rank of \( \Phi _U - I\) is \( n^2 -1.\)

This will be the purpose of our reasoning described below.

The bilinear transformation \((A,B) \rightarrow \mathrm{Tr} (B) A\) from \(\mathscr {L}(V)\times \mathscr {L}(V)\) to \(\mathscr {L}(V)\) induces the linear transformation

$$\begin{aligned} \mathrm{Tr}_2 : \mathscr {L}(V\otimes V) = [\mathscr {L}(V) \otimes \mathscr {L}(V)]\rightarrow \mathscr {L}(V). \end{aligned}$$

Denote by \(e_1,e_2,\ldots ,e_n\) an orthonormal basis for V. We also denote \(L_{ij}\in \mathscr {L}(V)\) the transformation such that \(L_{ij}(e_j)=e_i\) and \(L_{ij}(e_k)=0\) if \(k \ne j\).

The \(L_{ij}\) provide a basis for \(\mathscr {L}(V)\).

If \(A \in \mathscr {L}(V)\), we can write \(A= \sum _{i,j} a_{ij} L_{ij}\) and we call \([a_{ij}]_{1\le i,j\le n}\) the matrix of A.

Note that \(e_i\otimes e_j\), \(1\le i,j\le n\) is an orthonormal basis of \(V \otimes V\). Moreover,

$$\begin{aligned} L_{ik} \otimes L_{jl} (e_k\otimes e_l )= e_i\otimes e_j, \end{aligned}$$

and

$$\begin{aligned} L_{ik} \otimes L_{jl} (e_p\otimes e_q )= 0 \quad \text {if} \ (p,q)\ne (k,l). \end{aligned}$$

It is also true that:

  1. (a)

    \(L_{ij} L_{pq}=0\) if \(j \ne p\),

  2. (b)

    \(L_{ij} L_{jq}=L_{iq},\)

  3. (c)

    Tr \((L_{ij})=0\) if \(i\ne j\) and Tr \((L_{ii})=1\).

One can see that \(L_{ik}\otimes L_{jl}\), \(1\le i,k,j,l\le n\) is a basis for \(\mathscr {L}(V\otimes V).\)

Given \(T\in \mathscr {L}(V\otimes V)\) denote \(T= \sum t_{i,j,k,l} L_{ik}\otimes L_{jl}\). Then,

$$\begin{aligned} \mathrm{Tr}_2 (T) = \sum t_{i,j,k,j} L_{ik}= \sum _{ik} \bigg (\sum _j t_{i,j,k,j} \bigg ) L_{ik}. \end{aligned}$$
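
As a quick check (our own sketch), the coordinate formula for \(\mathrm{Tr}_2\) agrees with the defining property \(\mathrm{Tr}_2(A\otimes B)=\mathrm{Tr}(B)A\); in the matrix of T, the coefficient \(t_{i,j,k,l}\) sits at row \(in+j\), column \(kn+l\) (0-based):

```python
import numpy as np

# Check Tr_2(T) = sum_{i,k} (sum_j t_{i,j,k,j}) L_{ik} against
# Tr_2(A tensor B) = Tr(B) A, for a product operator T = A tensor B.
rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T = np.kron(A, B)
# In the basis L_{ik} tensor L_{jl}, the coefficient t_{i,j,k,l} is the
# matrix entry of T at row i*n + j, column k*n + l.
t = T.reshape(n, n, n, n)
lhs = np.einsum('ijkj->ik', t)      # coordinate formula
rhs = np.trace(B) * A               # definition on product operators
formula_ok = np.allclose(lhs, rhs)
```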

In the appendix, we give a direct proof that: if \(A \in \Gamma \), then \(\Phi _U(A)\in \Gamma \), for all \(U \in \mathscr {U}.\)

Now we express \(\Phi _U\) in coordinates. We choose an orthonormal basis \(e_1,e_2,\ldots ,e_n\in V\) which diagonalizes \(\beta \). That is,

$$\begin{aligned} \beta = \sum _q \lambda _q L_{qq},\quad \lambda _q >0,\ \ 1\le q\le n,\ \ \sum _q \lambda _q=1. \end{aligned}$$

Given r, s with \(1\le r,s\le n\), we will calculate \(\Phi _U (L_{rs})\).

Suppose \(U= \sum u_{i,j,k,l} L_{ik}\otimes L_{jl}\); then \(U^*= \sum \overline{u_{k,l,i,j}} L_{ik}\otimes L_{jl}\) and

$$\begin{aligned} (L_{rs}\otimes \beta ) U^* = \bigg (\sum _q \lambda _q L_{rs} \otimes L_{qq} \bigg ) U^*=\sum _j \lambda _j \overline{u_{k,l,s,j}} L_{rk} \otimes L_{jl}. \end{aligned}$$

Now, we write \(U= \sum u_{\alpha ,\beta ,\gamma ,\delta } L_{\alpha \gamma }\otimes L_{\beta \delta }\). Then, we get

$$\begin{aligned} U (L_{rs}\otimes \beta ) U^*= \sum \lambda _j u_{\alpha ,\beta ,r,j} \overline{u_{k,l,s,j}}L_{\alpha k} \otimes L_{\beta l}. \end{aligned}$$

Finally,

$$\begin{aligned} \Phi _U (L_{rs}) = \sum \lambda _j u_{\alpha ,l,r,j} \overline{u_{k,l,s,j}} L_{ \alpha k}= \sum _{\alpha ,k} \bigg ( \sum _{j,l} \lambda _j u_{\alpha ,l,r,j} \overline{u_{k,l,s,j}} \bigg )L_{ \alpha k}. \end{aligned}$$

As \(\Gamma \) is convex and compact and \(\Phi _U\) is continuous, as we said before, there exists a fixed point \(A \in \Gamma \). In particular, the rank of \(\Phi _U - I\) is at most \(n^2-1.\)

We will present an explicit U such that the rank of \( \Phi _U - I\) is \( n^2 -1.\)

This U will be given by a certain kind of circulant unitary operator.

Suppose \(u_1,u_2,\ldots ,u_{n^2}\) are complex numbers of modulus 1. We define U in the following way

$$\begin{aligned} \begin{array}{c} U(e_1\otimes e_1)= u_1 (e_1 \otimes e_2), U(e_1\otimes e_2)= u_2 (e_1 \otimes e_3),\ldots , U(e_1\otimes e_n)=u_n (e_2\otimes e_1),\\ U(e_2\otimes e_1)= u_{n+1} (e_2 \otimes e_2), U(e_2\otimes e_2)= u_{n+2} (e_2 \otimes e_3),\ldots , U(e_2\otimes e_n)=u_{2n} (e_3\otimes e_1),\\ \ldots \\ U(e_n\otimes e_1)= u_{n^2 -n+1} (e_n \otimes e_2), U(e_n\otimes e_2)= u_{n^2-n +2} (e_n \otimes e_3),\ldots , U(e_n\otimes e_n)=u_{n^2} (e_1\otimes e_1), \end{array} \end{aligned}$$
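In the ordering \(v_m = e_k\otimes e_l\) with \(m=(k-1)n+l\), the definition above says \(U v_m = u_m\, v_{m+1 \bmod n^2}\): a cyclic shift composed with phases. A sketch of this construction (our own code):

```python
import numpy as np

def circulant_U(phases):
    # U v_m = phases[m] * v_{(m+1) mod N}, where N = n^2 and the product
    # basis is ordered as v_m = e_k tensor e_l with m = (k-1)n + l (0-based).
    N = len(phases)
    U = np.zeros((N, N), dtype=complex)
    for m in range(N):
        U[(m + 1) % N, m] = phases[m]
    return U

n = 3
rng = np.random.default_rng(2)
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=n * n))
U = circulant_U(phases)
is_unitary = np.allclose(U.conj().T @ U, np.eye(n * n))
```

Each column has a single entry of modulus 1, so U is unitary for any choice of phases.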

We will show that for a convenient choice of \(u_1,u_2,\ldots ,u_{n^2}\), the rank of \(\Phi _U-I\) is \(n^2 -1\).

Suppose

$$\begin{aligned} U= \sum u_{i,j,k,l} L_{ik}\otimes L_{jl} , \end{aligned}$$

in this case

$$\begin{aligned} U(e_k\otimes e_l)= \sum _{i,j} u_{i,j,k,l} e_{i}\otimes e_{j}. \end{aligned}$$

By definition of U, we get

  1. (a)

    if \(l<n\), then \( u_{i,j,k,l}\ne 0\), if and only if, \(i=k\), \(j=l+1\);

  2. (b)

    if \(k<n\), then \( u_{i,j,k,n}\ne 0\), if and only if, \(i=k+1\), \(j=1\);

  3. (c)

    \( u_{i,j,n,n}\ne 0\), if and only if, \(i=j=1\).

For fixed r, s such that \(1\le r,s\le n\), we get from (a)–(c):

\(1\le r <n, \) \(1\le s<n\), implies

$$\begin{aligned} \Phi _U(L_{rs})=\left( \sum _{j=1}^{n-1} u_{r,j+1,r,j } \overline{u_{s,j+1,s,j}}\lambda _j \right) L_{rs}+u_{r+1,1,r,n } \overline{u_{s+1,1,s,n}}\lambda _n L_{(r+1)(s+1)}, \end{aligned}$$

\(r=n\), \(1\le s<n\), implies

$$\begin{aligned} \Phi _U(L_{ns})=\left( \sum _{j=1}^{n-1} u_{n,j+1,n,j } \overline{u_{s,j+1,s,j}}\lambda _j \right) L_{ns}+u_{1,1,n,n } \overline{u_{s+1,1,s,n}}\lambda _n L_{1(s+1)}, \end{aligned}$$

\(1\le r<n\), \(s=n\), implies

$$\begin{aligned} \Phi _U(L_{rn})=\left( \sum _{j=1}^{n-1} u_{r,j+1,r,j } \overline{u_{n,j+1,n,j}}\lambda _j \right) L_{rn}+u_{r+1,1,r,n } \overline{u_{1,1,n,n}}\lambda _n L_{(r+1)1}. \end{aligned}$$

In particular, for \(1 \le r<n\), we have \(\Phi _U(L_{rr})=(1-\lambda _n) L_{rr}+ \lambda _n L_{(r+1)(r+1)}.\) To show that the rank of \(\Phi _U-I\) is \(n^2-1\), we will show that the \(\Phi _U(L_{rs})-L_{rs}\) are linearly independent for \((r,s)\ne (n,n)\).

Suppose that

$$\begin{aligned} \sum _{(r,s)\ne (n,n)} c_{rs} (\Phi _U(L_{rs})-L_{rs})=0. \end{aligned}$$

The coefficient of \(L_{11} \) is \(-\lambda _n c_{11}\), then \(c_{11}=0.\) The coefficient of \(L_{22} \) is \(\lambda _n c_{11}- \lambda _n c_{22}\), then \(c_{22}=0.\)

$$\begin{aligned} \ldots \end{aligned}$$

The coefficient of \(L_{nn} \) is \(\lambda _n c_{(n-1)(n-1)}\), then \(c_{(n-1)(n-1)}=0.\)

Then, we get that

$$\begin{aligned} \sum _{r\ne s} c_{rs} (\Phi _U(L_{rs})-L_{rs})=0. \end{aligned}$$
(1)

We will divide the proof in several different cases.

(a) Case \(n=2\).

$$\begin{aligned} \sum _{r\ne s} c_{rs} (\Phi _U(L_{rs})-L_{rs})=c_{12} (\Phi _U(L_{12})-L_{12}) + c_{21} (\Phi _U(L_{21})-L_{21}). \end{aligned}$$

By definition of U, we have that \( u_{1,2,1,1}=u_1, \) \( u_{2,1,1,2}=u_2, \) \( u_{2,2,2,1}=u_3, \) \( u_{1,1,2,2}=u_4. \)

Therefore,

$$\begin{aligned} \Phi _U (L_{12})- L_{12}= (u_1 \overline{u_3} \lambda _1 -1) L_{12} + u_2 \overline{u_4} \lambda _2 L_{21} \end{aligned}$$

and

$$\begin{aligned} \Phi _U (L_{21})- L_{21}= (u_3 \overline{u_1} \lambda _1 -1) L_{21} + u_4 \overline{u_2} \lambda _2 L_{12}. \end{aligned}$$

From (1), it follows that

$$\begin{aligned} (u_1 \overline{u_3} \lambda _1 -1) c_{12} + u_4 \overline{u_2} \lambda _2 c_{21}= & {} 0 \\ u_2 \overline{u_4} \lambda _2 c_{12} + (u_3 \overline{u_1} \lambda _1 -1) c_{21}= & {} 0. \end{aligned}$$

Taking U such that \(u_1=i\), \(u_2=u_3=u_4=1\), the determinant of the above system equals \((u_1 \overline{u_3} \lambda _1 -1)(u_3 \overline{u_1} \lambda _1 -1)-\lambda _2^2 = 1+\lambda _1^2-\lambda _2^2>0\), which is nonzero. Then we get \(c_{12}=c_{21}=0.\)
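
This determinant is easy to verify numerically (a sketch with an arbitrary choice of eigenvalues \(\lambda _1=0.6\), \(\lambda _2=0.4\) of \(\beta \)):

```python
import numpy as np

# The 2x2 system in c12, c21 has coefficient matrix
# [[u1*conj(u3)*l1 - 1, u4*conj(u2)*l2],
#  [u2*conj(u4)*l2,     u3*conj(u1)*l1 - 1]].
l1, l2 = 0.6, 0.4                 # arbitrary eigenvalues of beta, l1 + l2 = 1
u1, u2, u3, u4 = 1j, 1, 1, 1      # the choice made in the text
M = np.array([[u1 * np.conj(u3) * l1 - 1, u4 * np.conj(u2) * l2],
              [u2 * np.conj(u4) * l2,     u3 * np.conj(u1) * l1 - 1]])
det = np.linalg.det(M)            # should equal 1 + l1^2 - l2^2
```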

Then, we get a U with maximal rank.

(b) Case \(n>2\).

We choose \(u_1,u_2,\ldots ,u_{n^2}\) according to Lemma 1 below.

The equations considered before can be written as follows:

\(1\le r<n\), \(1\le s<n\), \(r\ne s\), then, \(\Phi _U(L_{rs}) -L_{rs} = (a_{rs} -1) L_{rs} + b_{rs} L_{(r+1) (s+1)}, \)

\(1\le s<n\), then, \(\Phi _U(L_{ns}) -L_{ns} = (a_{ns} -1) L_{ns} + b_{ns} L_{1 (s+1)}, \)

\(1\le r<n\), then, \(\Phi _U(L_{rn}) -L_{rn} = (a_{rn} -1) L_{rn} + b_{rn} L_{(r+1) 1}.\)

For instance

$$\begin{aligned} a_{rs} = \sum _{j=1}^{n-1} u_{r,j+1,r,j } \overline{u_{s,j+1,s,j}}\lambda _j , \end{aligned}$$

and

$$\begin{aligned} b_{rs} =u_{r+1,1,r,n } \overline{u_{s+1,1,s,n}} \lambda _n. \end{aligned}$$

Note that \(u_{r,j+1,r,j } \overline{u_{s,j+1,s,j}}\) has modulus one and also \( u_{r+1,1,r,n } \overline{u_{s+1,1,s,n}}\).

Moreover, \(|b_{rs} |= \lambda _n>0\) and \(|a_{rs}|< \lambda _1 +\cdots +\lambda _{n-1}\). Indeed, note first that the products \(u_{r,j+1,r,j } \overline{u_{s,j+1,s,j}}\) are pairwise distinct by the choice of the \(u_{i,j,k,l}\) (see Lemma 1). Furthermore, by Lemma 2, \(|a_{rs} |\) cannot be equal to \(\lambda _1 +\cdots +\lambda _{n-1}\).

Therefore, \(| a_{rs}-1| \ge 1 - |a_{rs}|>1 - \sum _{q=1}^{n-1} \lambda _q=\lambda _n =| b_{ij}|>0,\) for all r, s, i, j with \(r\ne s\), \(i\ne j.\)

Suppose \(2\le k\le n.\)

Remember that the \(L_{ij}\) form a linearly independent set.

The coefficient of \(L_{1k}\) in (1) is

$$\begin{aligned} c_{1k}(a_{1k}-1)+ c_{n (k-1) } b_{n (k-1)}=0. \end{aligned}$$

The coefficient of \(L_{n(k-1)}\) in (1) is

$$\begin{aligned} c_{n(k-1)}(a_{n(k-1)}-1)+ c_{(n-1) (k-2) } b_{(n-1) (k-2)}=0. \end{aligned}$$

The coefficient of \(L_{(n-k+2)1}\) in (1) is

$$\begin{aligned} c_{(n-k+2)1}(a_{(n-k+2)1}-1)+ c_{(n-k+1)n } b_{(n-k+1) n}=0. \end{aligned}$$

The coefficient of \(L_{(n-k +1)n}\) in (1) is

$$\begin{aligned} c_{(n-k+1)n}(a_{(n-k+1)n}-1)+c_{(n-k)(n-1) }b_{(n-k) (n-1)}=0. \end{aligned}$$

The coefficient of \(L_{(n-k)(n-1)}\) in (1) is

$$\begin{aligned}&c_{(n-k)(n-1)}(a_{(n-k)(n-1)}-1)+ c_{(n-k-1) (n-2) } b_{(n-k-1) (n-2)}=0.\\&\ldots \end{aligned}$$

The coefficient of \(L_{2(k+1)}\) in (1) is

$$\begin{aligned} c_{2(k+1)}(a_{2(k+1)}-1)+ c_{1 k } b_{1 k}=0. \end{aligned}$$

If \(c_{1k}\ne 0\), then, from above, we get \(|c_{1k}|< |c_{n(k-1)}|<\cdots <|c_{2(k+1)}|<|c_{1k}|.\)

Then, we get a contradiction. It follows that \(c_{1k}=0\).

Therefore,

$$\begin{aligned} c_{n(k-1)}= c_{(n-1)(k-2)}= \cdots =c_{(n-k+2)1}= c_{(n-k+1)n}= c_{(n-k)(n-1)}= \cdots =c_{2(k+1)}=0. \end{aligned}$$

From this, it follows that \(c_{rs}=0\) for all r, s with \(r \ne s\). This shows that for such U the rank of \(\Phi _U - I\) is maximal, equal to \(n^2-1\).
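
As a numerical sanity check of the whole argument (our own sketch), one can assemble \(\Phi _U\) as an \(n^2\times n^2\) matrix acting on \(\mathrm{vec}(Q)\), for a phase-circulant U with generic phases, and verify that \(\mathrm{rank}(\Phi _U - I)=n^2-1\):

```python
import numpy as np

def ptrace2(T, n):
    # Tr_2(T)[i, k] = sum_j T[i*n + j, k*n + j]
    return np.einsum('ijkj->ik', T.reshape(n, n, n, n))

def circulant_U(phases):
    # U v_m = phases[m] * v_{(m+1) mod N} on the product basis
    N = len(phases)
    U = np.zeros((N, N), dtype=complex)
    for m in range(N):
        U[(m + 1) % N, m] = phases[m]
    return U

def phi_matrix(U, beta):
    # Column (r, s) of the matrix of Phi_U is vec(Phi_U(L_rs)).
    n = beta.shape[0]
    cols = []
    for r in range(n):
        for s in range(n):
            E = np.zeros((n, n), dtype=complex)
            E[r, s] = 1.0
            cols.append(ptrace2(U @ np.kron(E, beta) @ U.conj().T, n).flatten())
    return np.array(cols).T

n = 3
rng = np.random.default_rng(3)
beta = np.diag([0.5, 0.3, 0.2])    # lambda_q > 0, summing to 1
U = circulant_U(np.exp(1j * rng.uniform(0, 2 * np.pi, n * n)))
rank = np.linalg.matrix_rank(phi_matrix(U, beta) - np.eye(n * n))
```

Generic continuous phases satisfy the distinct-ratio condition of Lemma 1 almost surely, so the computed rank should be maximal.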

Now we will prove two Lemmas that we used before.

Lemma 1

Given \(m \ge 2\), there exist complex numbers \(u_1,\ldots ,u_m\) of modulus 1, such that, if \(1\le i \ne j\le m, \) \(1 \le k \ne l\le m\) and \(u_i \overline{u_j} = u_k \overline{u_l}\), then \(i=k, j=l\).

Proof

The proof is by induction on m.

For \(m=2\), just take \(u_1 \overline{u_2} \) not in \(\mathbb {R}.\)

Suppose the claim is true for \(m \ge 2\), and let \(u_1,\ldots ,u_m\) be the corresponding numbers.

Consider

$$\begin{aligned} S =\{u_i \overline{u_j} | 1 \le i,j\le m \} \end{aligned}$$

and

$$\begin{aligned} T =\{u_pu_q | 1 \le p,q\le m \}. \end{aligned}$$

Then, take \(u_{m+1}\) such that \(u_{m+1} \overline{u_p} \) is not in S for all \(1\le p\le m\), and \( u^2_{m+1} \) is not in T.

Then, \(u_1,\ldots ,u_m,u_{m+1}\) satisfy the claim.

\(\square \)
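
Lemma 1 can also be realized by an explicit deterministic choice (our own, not from the paper): take \(u_j=\omega ^{3^j}\) for a primitive M-th root of unity \(\omega \) with \(M>2\cdot 3^m\). Then \(u_i \overline{u_j}=\omega ^{3^i-3^j}\), and the exponents \(3^i-3^j\), \(i\ne j\), are pairwise distinct integers in \((-M/2, M/2)\):

```python
import numpy as np
from itertools import permutations

# Deterministic realization of Lemma 1: u_j = omega^(3^j) with omega a
# primitive M-th root of unity, M > 2 * 3^m. Distinct ordered pairs (i, j)
# give distinct exponents 3^i - 3^j, hence distinct ratios u_i * conj(u_j).
m = 5
M = 2 * 3**m + 1
u = np.exp(2j * np.pi * (3 ** np.arange(1, m + 1)) / M)
ratios = [u[i] * np.conj(u[j]) for i, j in permutations(range(m), 2)]
distinct = len(ratios) == len({round(np.angle(r), 8) for r in ratios})
```

The distinctness of the exponents follows from uniqueness of base-3 valuations, as in the inductive proof above.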

Lemma 2

Consider real positive numbers \(\lambda _1,\ldots ,\lambda _m\) and complex numbers \(z_1,\ldots ,z_m\) of modulus 1.

Suppose \(| \sum _{j=1}^m \lambda _j z_j| = \sum _{j=1}^m \lambda _j\), then \(z_1=z_2=\cdots =z_m.\)

Proof

The proof is by induction on m.

It is obviously true for \(m=1\).

Suppose the claim is true for \(m-1\) and we will show is true for m.

Note that

$$\begin{aligned} \sum _{j=1}^m \lambda _j = \left| \sum _{j=1}^m \lambda _j z_j \right| \le \left| \sum _{j=1}^{m-1} \lambda _j z_j \right| + \lambda _m\le \sum _{j=1}^m \lambda _j. \end{aligned}$$

From this follows that

$$\begin{aligned} \left| \sum _{j=1}^{m-1} \lambda _j z_j \right| = \sum _{j=1}^{m-1} \lambda _j. \end{aligned}$$

Then, by the induction hypothesis, \(z_1=z_2=\cdots =z_{m-1}=z\).

Therefore,

$$\begin{aligned} \sum _{j=1}^m \lambda _j = \left| z \sum _{j=1}^{m-1} \lambda _j + z_m \lambda _m \right| \le \left| z \sum _{j=1}^{m-1} \lambda _j \right| + |z_m \lambda _m|= \sum _{j=1}^m \lambda _j.\end{aligned}$$

If \(v_1,v_2\) are nonzero complex numbers such that \(|v_1+ v_2|=|v_1| + |v_2|\), then they have the same argument.

Then, there exists an \(s>0\) such that \(z \sum _{j=1}^{m-1} \lambda _j= s z_m \lambda _m\).

Now, taking modulus on both sides of the expression above, we get

$$\begin{aligned} \sum _{j=1}^{m-1} \lambda _j=\left| z \sum _{j=1}^{m-1} \lambda _j \right| =|s z_m \lambda _m|= s \lambda _m. \end{aligned}$$

From this it follows that \(z_m=z\).

\(\square \)
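
A small numerical illustration of Lemma 2 (our own sketch): equality in the triangle inequality holds when all phases agree and fails strictly as soon as one phase is perturbed.

```python
import numpy as np

# |sum lambda_j z_j| = sum lambda_j forces all z_j equal; perturbing one
# phase makes the inequality strict.
lam = np.array([0.2, 0.3, 0.5])
z_equal = np.exp(1j * 0.7) * np.ones(3)
z_mixed = z_equal.copy()
z_mixed[2] = np.exp(1j * 1.7)
eq_case = np.isclose(abs(np.sum(lam * z_equal)), lam.sum())
strict_case = abs(np.sum(lam * z_mixed)) < lam.sum() - 1e-3
```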

3 The two-dimensional case: explicit results

Our main interest in this section is to present an explicit expression for the unique fixed point of \(\Phi _U\). We restrict ourselves to the two-dimensional case.

We consider a two-by-two density matrix \(\beta \) which is diagonal in the basis \(f_1, f_2 \in \mathbb {C}^2\). Without loss of generality, we can assume that

$$\begin{aligned} \beta = \left( \begin{array}{c@{\quad }c} p_1 &{} 0\\ 0 &{} p_2 \end{array} \right) , \end{aligned}$$

\(p_1,p_2>0\). We first describe in coordinates some of the definitions used earlier in the paper.

If

$$\begin{aligned} R= \left( \begin{array}{c@{\quad }c} R_{11} &{} R_{12}\\ R_{21} &{} R_{22} \end{array} \right) , \end{aligned}$$

and

$$\begin{aligned} S=\left( \begin{array}{c@{\quad }c} S_{11} &{} S_{12}\\ S_{21} &{} S_{22} \end{array} \right) , \end{aligned}$$

then

$$\begin{aligned} R \otimes S= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} R_{11} S_{11} &{} R_{11} S_{12} &{} R_{12}S_{11} &{} R_{12}S_{12}\\ R_{11} S_{21} &{} R_{11} S_{22} &{} R_{12} S_{21}&{} R_{12}S_{22}\\ R_{21} S_{11} &{} R_{21}S_{12} &{} R_{22} S_{11} &{} R_{22}S_{12}\\ R_{21} S_{21}&{} R_{21} S_{22}&{} R_{22}S_{21} &{} R_{22} S_{22} \end{array} \right) \end{aligned}$$

and

$$\begin{aligned} \text {Tr}_2 ( R \otimes S)=\left( \begin{array}{c@{\quad }c} R_{11} (S_{11} + S_{22}) &{} R_{12}(S_{11} + S_{22}) \\ R_{21} (S_{11} + S_{22}) &{} R_{22} (S_{11} + S_{22}) \end{array} \right) .\end{aligned}$$

Given

$$\begin{aligned} T= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} T_{11} &{} T_{12} &{} T_{13} &{} T_{14}\\ T_{21} &{} T_{22} &{} T_{23} &{} T_{24}\\ T_{31} &{} T_{32} &{} T_{33} &{} T_{34}\\ T_{41} &{} T_{42} &{} T_{43} &{} T_{44} \end{array} \right) \end{aligned}$$

then, in a consistent way, we have

$$\begin{aligned} \text {Tr}_2 (T)= \left( \begin{array}{c@{\quad }c} T_{11}+ T_{22} &{} T_{13}+ T_{24}\\ T_{31}+ T_{42} &{} T_{33}+ T_{44} \end{array} \right) \end{aligned}$$
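
A quick consistency check (ours) of this 4×4 coordinate formula against \(\mathrm{Tr}_2(R\otimes S)=\mathrm{Tr}(S)R\):

```python
import numpy as np

# Verify the 4x4 coordinate formula for Tr_2 on a product T = R tensor S.
rng = np.random.default_rng(5)
R = rng.normal(size=(2, 2))
S = rng.normal(size=(2, 2))
T = np.kron(R, S)
tr2 = np.array([[T[0, 0] + T[1, 1], T[0, 2] + T[1, 3]],
                [T[2, 0] + T[3, 1], T[2, 2] + T[3, 3]]])
match = np.allclose(tr2, np.trace(S) * R)
```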

The action of an operator U on \(\mathbb {C}^2\otimes \mathbb {C}^2\) in the basis \(e_1 \otimes f_1\), \(e_1 \otimes f_2\), \(e_2 \otimes f_1\), \(e_2 \otimes f_2\) is given by a 4 by 4 matrix denoted by

$$\begin{aligned} U= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} U_{11}^{11} &{} U_{11}^{12} &{} U_{12}^{11} &{} U_{12}^{12}\\ U_{11}^{21} &{} U_{11}^{22} &{} U_{12}^{21} &{} U_{12}^{22}\\ U_{21}^{11} &{} U_{21}^{12} &{} U_{22}^{11} &{} U_{22}^{12}\\ U_{21}^{21} &{} U_{21}^{22} &{} U_{22}^{21} &{} U_{22}^{22} \end{array} \right) \end{aligned}$$

and

$$\begin{aligned} U^*= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \overline{U_{11}^{11}} &{} \overline{U_{11}^{21}} &{} \overline{ U_{21}^{11}} &{} \overline{U_{21}^{21}}\\ \overline{U_{11}^{12}} &{} \overline{U_{11}^{22}} &{} \overline{U_{21}^{12}} &{}\overline{ U_{21}^{22}}\\ \overline{U_{12}^{11}} &{} \overline{U_{12}^{21}} &{} \overline{ U_{22}^{11}} &{} \overline{U_{22}^{21}}\\ \overline{U_{12}^{12}} &{} \overline{U_{12}^{22}} &{} \overline{ U_{22}^{12}} &{} \overline{ U_{22}^{22}} \end{array} \right) \end{aligned}$$

If U is unitary then \(U U^*=I\). This relation implies the following set of equations:

$$\begin{aligned} (1)\quad U_{11}^{11} \overline{ U_{11}^{11}} + U_{11}^{1 2} \overline{U_{11}^{1 2}} + U_{1 2}^{11} \overline{ U_{1 2}^{11}} + U_{1 2}^{1 2} \overline{U_{1 2}^{1 2}}=1, \end{aligned}$$
$$\begin{aligned} (2)\quad U_{11}^{11} \overline{U_{11}^{2 1}} + U_{11}^{1 2} \overline{ U_{11}^{22} }+ U_{1 2}^{11} \overline{U_{1 2}^{2 1}} + U_{1 2}^{1 2} \overline{ U_{1 2}^{22}}=0, \end{aligned}$$
$$\begin{aligned} (3)\quad U_{11}^{11} \overline{U_{2 1}^{11}} + U_{11}^{1 2} \overline{U_{2 1}^{1 2}} + U_{1 2}^{11} \overline{ U_{22}^{11} }+ U_{1 2}^{1 2} \overline{U_{22}^{1 2}}=0, \end{aligned}$$
$$\begin{aligned} (4) \quad U_{11}^{11} \overline{U_{2 1}^{2 1}} + U_{11}^{1 2} \overline{ U_{2 1}^{22}} + U_{1 2}^{11} \overline{U_{22}^{2 1}} + U_{1 2}^{1 2} \overline{ U_{22}^{22}}=0, \end{aligned}$$
$$\begin{aligned} (5) \quad U_{11}^{2 1} \overline{U_{11}^{11}} + U_{11}^{22} \overline{ U_{11}^{1 2}} + U_{1 2}^{2 1} \overline{U_{1 2}^{11}} + U_{1 2}^{2 2} \overline{ U_{1 2}^{1 2}}=0, \end{aligned}$$
$$\begin{aligned} (6) \quad U_{11}^{2 1} \overline{U_{11}^{2 1}} + U_{11}^{22} \overline{ U_{11}^{22}} + U_{1 2}^{2 1} \overline{U_{1 2}^{2 1}} + U_{1 2}^{22} \overline{ U_{1 2}^{22}}=1, \end{aligned}$$
$$\begin{aligned} (7) \quad U_{11}^{2 1} \overline{ U_{2 1}^{11}} + U_{11}^{22} \overline{U_{2 1}^{1 2}} + U_{1 2}^{2 1} \overline{ U_{22}^{11} }+ U_{1 2}^{22} \overline{U_{22}^{1 2}}=0, \end{aligned}$$
$$\begin{aligned} (8)\quad U_{11}^{2 1} \overline{U_{2 1}^{2 1}} + U_{11}^{22} \overline{ U_{2 1}^{22}}+ U_{1 2}^{2 1} \overline{U_{22}^{2 1}} + U_{1 2}^{22} \overline{U_{22}^{22}}=0, \end{aligned}$$
$$\begin{aligned} (9) \quad U_{2 1}^{11} \overline{U_{11}^{11}} + U_{2 1}^{1 2} \overline{U_{11}^{1 2}} + U_{22}^{11} \overline{U_{1 2}^{11}} + U_{22}^{1 2} \overline{U_{1 2}^{1 2}}=0, \end{aligned}$$
$$\begin{aligned} (10) \quad U_{2 1}^{11} \overline{U_{11}^{2 1}} + U_{2 1}^{1 2} \overline{U_{11}^{22}} + U_{22}^{11} \overline{U_{1 2}^{2 1}} + U_{22}^{1 2} \overline{U_{1 2}^{22}}=0, \end{aligned}$$
$$\begin{aligned} (11) \quad U_{2 1}^{11} \overline{U_{2 1}^{11}} + U_{2 1}^{1 2} \overline{U_{2 1}^{1 2}} + U_{22}^{11} \overline{U_{22}^{11}} + U_{22}^{1 2} \overline{U_{22}^{1 2}}=1, \end{aligned}$$
$$\begin{aligned} (12)\quad U_{2 1}^{11} \overline{U_{2 1}^{ 2 1}} + U_{ 2 1}^{ 1 2} \overline{ U_{ 2 1}^{22}} + U_{22}^{11} \overline{U_{22}^{ 2 1}} + U_{22}^{ 1 2} \overline{ U_{22}^{22}}=0, \end{aligned}$$
$$\begin{aligned} (13)\quad U_{ 2 1}^{ 2 1} \overline{U_{ 2 1}^{11}} + U_{ 2 1}^{22} \overline{ U_{ 2 1}^{ 1 2}} + U_{22}^{ 2 1} \overline{U_{22}^{11}} + U_{22}^{22} \overline{U_{22}^{ 1 2}}=0, \end{aligned}$$
$$\begin{aligned} (14)\quad U_{ 2 1}^{ 2 1} \overline{U_{11}^{11}} + U_{ 2 1}^{22} \overline{ U_{11}^{ 1 2}} + U_{22}^{ 2 1} \overline{U_{ 1 2}^{11}} + U_{22}^{22} \overline{U_{ 1 2}^{ 1 2}}=0, \end{aligned}$$
$$\begin{aligned} (15)\quad U_{ 2 1}^{ 2 1} \overline{U_{11}^{ 2 1}} + U_{ 2 1}^{22} \overline{ U_{11}^{22}} + U_{22}^{ 2 1} \overline{U_{ 1 2}^{ 2 1}} + U_{22}^{22} \overline{U_{ 1 2}^{22}}=0, \end{aligned}$$
$$\begin{aligned} (16)\quad U_{ 2 1}^{ 2 1} \overline{U_{ 2 1}^{ 2 1}} + U_{ 2 1}^{22} \overline{ U_{ 2 1}^{22}} + U_{22}^{ 2 1} \overline{U_{22}^{ 2 1}} + U_{22}^{22} \overline{U_{22}^{22}}=1. \end{aligned}$$

Equation (2) is equivalent to (5), equation (12) is equivalent to (13), equation (8) is equivalent to (15), equation (3) is equivalent to (9), equation (7) is equivalent to (10) and equation (4) is equivalent to (14). Then, we have six free parameters for the coefficients of U.

Using the entries \(U^{ij}_{rs}\) we considered above, we define

$$\begin{aligned} \tilde{L}(Q)= & {} p_1 \sum _{i=1}^2 \left( \begin{array}{cc} \overline{U^{i1}_{11}} &{} \overline{U^{i1}_{21}}\\ \overline{U^{i1}_{12}} &{} \overline{U^{i1}_{22}} \end{array} \right) Q \left( \begin{array}{cc} U^{i1}_{11} &{} U^{i1}_{12}\\ U^{i1}_{21} &{} U^{i1}_{22} \end{array} \right) \\&+ p_2 \sum _{i=1}^2 \left( \begin{array}{cc} \overline{U^{i2}_{11}} &{} \overline{U^{i2}_{21}}\\ \overline{U^{i2}_{12}} &{} \overline{U^{i2}_{22}} \end{array} \right) Q \left( \begin{array}{cc} U^{i2}_{11} &{} U^{i2}_{12}\\ U^{i2}_{21} &{} U^{i2}_{22} \end{array} \right) \end{aligned}$$

Introducing auxiliary operators \(L_{ij}\) (not to be confused with the matrix units \(L_{ij}\) of the previous section), we can express

$$\begin{aligned} \tilde{L}(Q)= & {} \sum _{i=1}^2 (\sqrt{p_1} (U^{i1})^*) Q (\sqrt{p_1} U^{i1})+ \sum _{i=1}^2 (\sqrt{p_2} (U^{i2})^*) Q (\sqrt{p_2} U^{i2})\\= & {} \sum _{i=1}^2 L_{i1}^* Q L_{i1} + \sum _{i=1}^2 L_{i2}^* Q L_{i2}=\sum _{i,j=1}^2 L_{ij}^* Q L_{ij}. \end{aligned}$$

From the fact that \(U U^* =I\), it follows (after a long computation) that

$$\begin{aligned} \tilde{L}(I) =I. \end{aligned}$$

Note that \( \tilde{L}\) preserves the cone of positive matrices.

Using the entries \(U^{ij}_{rs}\) described above, we denote

$$\begin{aligned} \hat{L}(Q)= & {} p_1 \sum _{i=1}^2 \left( \begin{array}{cc} U^{i1}_{11} &{} U^{i1}_{12}\\ U^{i1}_{21} &{} U^{i1}_{22} \end{array} \right) Q \left( \begin{array}{cc} \overline{U^{i1}_{11}} &{} \overline{U^{i1}_{21}}\\ \overline{U^{i1}_{12}} &{} \overline{U^{i1}_{22}} \end{array} \right) \\&+ p_2 \sum _{i=1}^2 \left( \begin{array}{cc} U^{i2}_{11} &{} U^{i2}_{12}\\ U^{i2}_{21} &{} U^{i2}_{22} \end{array} \right) Q \left( \begin{array}{cc} \overline{U^{i2}_{11}} &{} \overline{U^{i2}_{21}}\\ \overline{U^{i2}_{12}} &{} \overline{U^{i2}_{22}} \end{array} \right) =\sum _{i,j=1}^2 L_{ij} Q L_{ij}^*. \end{aligned}$$

One can also show that \(\hat{L} (Q) = \mathrm{Tr}_2 [ U (Q \otimes \beta ) U^* ] \) (see [4]).

The first expression is the Kraus decomposition and the second the Stinespring dilation.

Moreover, \( \hat{L}\) preserves density matrices. This is proved in the appendix, but we can present here another way to verify trace preservation. If Q is a density matrix, then

$$\begin{aligned} \mathrm{Tr} (\hat{L}(Q))= & {} \mathrm{Tr} \left( \sum _{i,j=1}^2 L_{ij} Q L_{ij}^*\right) = \sum _{i,j=1}^2 \mathrm{Tr} (L_{ij} Q L_{ij}^*) =\sum _{i,j=1}^2 \mathrm{Tr} (Q L_{ij}^* L_{ij} ) \\= & {} \mathrm{Tr} \left( \sum _{i,j=1}^2 Q L_{ij}^* L_{ij} \right) =\mathrm{Tr} \left( Q \sum _{i,j=1}^2 L_{ij}^* L_{ij}\right) = \mathrm{Tr} (Q)=1.\end{aligned}$$
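
The Kraus/Stinespring identity and this trace computation can be checked numerically (a sketch in the Kronecker ordering system ⊗ environment; the index extraction below is our own convention):

```python
import numpy as np

def ptrace2(T, n):
    # Tr_2(T)[i, k] = sum_j T[i*n + j, k*n + j]
    return np.einsum('ijkj->ik', T.reshape(n, n, n, n))

# With beta = diag(p1, p2), the Kraus operators read
# K_{kl}[a, b] = sqrt(p_l) * U[2a + k, 2b + l], and
# sum_{k,l} K_{kl} Q K_{kl}* = Tr_2( U (Q tensor beta) U* ).
rng = np.random.default_rng(6)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                           # a random unitary on C^2 tensor C^2
p = np.array([0.7, 0.3])
Q = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])

kraus = [np.sqrt(p[l]) * U.reshape(2, 2, 2, 2)[:, k, :, l]
         for k in range(2) for l in range(2)]
lhs = sum(K @ Q @ K.conj().T for K in kraus)     # Kraus form
rhs = ptrace2(U @ np.kron(Q, np.diag(p)) @ U.conj().T, 2)  # Stinespring form
kraus_equals_stinespring = np.allclose(lhs, rhs)
```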

We denote

$$\begin{aligned} Q = \left( \begin{array}{cc} Q_{11} &{} Q_{12}\\ Q_{21} &{} Q_{22} \end{array} \right) .\end{aligned}$$

Then,

$$\begin{aligned}&U^{ij} \,Q \, (U^{ij})^*= \left( \begin{array}{cc} U^{ij}_{11} &{} U^{ij}_{ 12}\\ U^{ij}_{ 21} &{} U^{ij}_{22} \end{array} \right) \left( \begin{array}{cc} Q_{11} &{} Q_{12}\\ Q_{21} &{} Q_{22} \end{array} \right) \left( \begin{array}{cc} \overline{U^{ij}_{11}} &{} \overline{U^{ij}_{ 21}}\\ \overline{U^{ij}_{ 12}} &{} \overline{U^{ij}_{22}} \end{array} \right) \\&\quad = \left( \begin{array}{cc} \overline{U^{ij}_{11}} ( U^{ij}_{11} Q_{11} + U^{ij}_{ 12} Q_{21} ) + \overline{U^{ij}_{12}} ( U^{ij}_{11} Q_{12} + U^{ij}_{12} Q_{22} ) &{}\overline{U^{ij}_{ 21}} ( U^{ij}_{11} Q_{11} + U^{ij}_{12} Q_{21} ) + \overline{ U^{ij}_{22} }( U^{ij}_{11} Q_{12} + U^{ij}_{12} Q_{22} ) \\ \overline{U^{ij}_{11}} ( U^{ij}_{ 21} Q_{11} + U^{ij}_{22} Q_{21} ) + \overline{U^{ij}_{ 12}} ( U^{ij}_{ 21} Q_{12} + U^{ij}_{22} Q_{22} ) &{} \overline{U^{ij}_{21}} ( U^{ij}_{21} Q_{11} + U^{ij}_{22} Q_{21} ) + \overline{U^{ij}_{22}} ( U^{ij}_{21} Q_{12} + U^{ij}_{22} Q_{22} ) \end{array} \right) , \end{aligned}$$

We have to compute

$$\begin{aligned} \hat{L} (Q) = p_1 [ U^{11} Q (U^{11})^* + U^{21} Q (U^{21})^* ] \,+ p_2 [ U^{12} Q (U^{12})^* + U^{22} Q (U^{22})^* ]. \end{aligned}$$

The coordinate \(a_{11}\) of \( \hat{L} (Q) \) is

$$\begin{aligned}&p_1 \big [ \overline{U^{11}_{11}} ( U^{11}_{11} Q_{11} + U^{11}_{ 12} Q_{21} ) + \overline{U^{11}_{12}} ( U^{11}_{11} Q_{12} + U^{11}_{12} Q_{22} ) \big ] \nonumber \\&\quad + p_1 \big [ \overline{U^{21}_{11}} ( U^{21}_{11} Q_{11} + U^{21}_{ 12} Q_{21} ) + \overline{U^{21}_{12}} ( U^{21}_{11} Q_{12} + U^{21}_{12} Q_{22} ) \big ] \nonumber \\&\quad + p_2 \big [ \overline{U^{12}_{11}} ( U^{12}_{11} Q_{11} + U^{12}_{ 12} Q_{21} ) + \overline{U^{12}_{12}} ( U^{12}_{11} Q_{12} + U^{12}_{12} Q_{22} ) \big ] \nonumber \\&\quad +p_2 \big [ \overline{U^{22}_{11}} ( U^{22}_{11} Q_{11} + U^{22}_{ 12} Q_{21} ) + \overline{U^{22}_{12}} ( U^{22}_{11} Q_{12} + U^{22}_{12} Q_{22} ) \big ]. \end{aligned}$$
(2)

The coordinate \(a_{12}\) is

$$\begin{aligned}&p_1 \big [ \overline{U^{11}_{21}} ( U^{11}_{11} Q_{11} + U^{11}_{ 12} Q_{21} ) + \overline{U^{11}_{22}} ( U^{11}_{11} Q_{12} + U^{11}_{12} Q_{22} ) \big ] \nonumber \\&\quad + p_1 \big [ \overline{U^{21}_{21}} ( U^{21}_{11} Q_{11} + U^{21}_{ 12} Q_{21} ) + \overline{U^{21}_{22}} ( U^{21}_{11} Q_{12} + U^{21}_{12} Q_{22} )\big ] \nonumber \\&\quad + p_2 \big [ \overline{U^{12}_{21}} ( U^{12}_{11} Q_{11} + U^{12}_{ 12} Q_{21} ) + \overline{U^{12}_{22}} ( U^{12}_{11} Q_{12} + U^{12}_{12} Q_{22} ) \big ] \nonumber \\&\quad + p_2 \big [ \overline{U^{22}_{21}} ( U^{22}_{11} Q_{11} + U^{22}_{ 12} Q_{21} ) + \overline{U^{22}_{22}} ( U^{22}_{11} Q_{12} + U^{22}_{12} Q_{22} ) \big ]. \end{aligned}$$
(3)

We will consider a parametrization of the density matrices taking \(Q_{11}= 1-Q_{22}\) and \(Q_{12} = \overline{Q_{21}}\).

The variable \(Q_{11}\) is real and satisfies \(0\le Q_{11}\le 1\). Indeed, by positivity of Q, we have \(0\le Q_{11} Q_{22}= Q_{11} (1- Q_{11})= Q_{11} - Q_{11}^2.\)

\(Q_{12} \) lies in \(\mathbb {C}= \mathbb {R}^2\), subject to \(Q_{11} (1- Q_{11}) - Q_{12} \overline{Q}_{12} \ge 0 \), because we are interested in density matrices, which are positive operators.

The numbers \(p_1\) and \(p_2\) are fixed. Consider the function G such that

$$\begin{aligned}&G(Q_{11},Q_{12}) = ( p_1 [ \overline{U^{11}_{11}} ( U^{11}_{11} Q_{11} + U^{11}_{ 12} \overline{Q_{12}} ) + \overline{U^{11}_{12}} ( U^{11}_{11} Q_{12} + U^{11}_{12} (1- Q_{11}) ) ] \\&\quad + p_1 [ \overline{U^{21}_{11}} ( U^{21}_{11} Q_{11} + U^{21}_{ 12} \overline{Q_{12}} ) + \overline{U^{21}_{12}} ( U^{21}_{11} Q_{12} + U^{21}_{12} (1-Q_{11}) ) ] \\&\quad + p_2 [ \overline{U^{12}_{11}} ( U^{12}_{11} Q_{11} + U^{12}_{ 12} \overline{Q_{12}} ) + \overline{U^{12}_{12}} ( U^{12}_{11} Q_{12} + U^{12}_{12} (1- Q_{11}) ) ] \\&\quad + p_2 [ \overline{U^{22}_{11}} ( U^{22}_{11} Q_{11} + U^{22}_{ 12} \overline{ Q_{12}} ) + \overline{U^{22}_{12}} ( U^{22}_{11} Q_{12} + U^{22}_{12} (1- Q_{11}) ) ],\\&p_1 [ \overline{U_{21}^{11}} ( U^{11}_{11} Q_{11} + U^{11}_{ 12} \overline{Q_{12}} ) + \overline{U^{11}_{22}} ( U^{11}_{11} Q_{12} + U^{11}_{12} (1- Q_{11}) ) ]\\&\quad + p_1 [ \overline{U^{21}_{21}} ( U^{21}_{11} Q_{11} + U^{21}_{ 12} \overline{Q_{12}} ) + \overline{U^{21}_{22}} ( U^{21}_{11} Q_{12} + U^{21}_{12} (1-Q_{11}) ) ]\\&\quad + p_2 [ \overline{U^{12}_{21}} ( U^{12}_{11} Q_{11} + U^{12}_{ 12} \overline{Q_{12}} ) + \overline{U^{12}_{22}} ( U^{12}_{11} Q_{12} + U^{12}_{12} (1- Q_{11}) ) ] \\&\quad +p_2 [ \overline{U^{22}_{21}} ( U^{22}_{11} Q_{11} + U^{22}_{ 12} \overline{ Q_{12}} ) + \overline{U^{22}_{22}} ( U^{22}_{11} Q_{12} + U^{22}_{12} (1- Q_{11}) ) ]) \end{aligned}$$

When is there a unique fixed point for G?

Example Suppose \(U= e^{i \beta \sigma _x \otimes \sigma _x}= \) \(\cos (\beta ) (I \otimes I) + i \sin (\beta ) (\sigma _x \otimes \sigma _x)\), where \(\beta \) here denotes an angle, not the density matrix above. In this case

$$\begin{aligned}U= \left( \begin{array}{l@{\quad }l@{\quad }l@{\quad }l} \cos \beta &{} 0 &{} 0 &{} i \sin \beta \\ 0 &{}\cos \beta &{} i \sin \beta &{} 0\\ 0 &{} i \sin \beta &{} \cos \beta &{} 0\\ i \sin \beta &{} 0 &{} 0 &{} \cos \beta \end{array} \right) \end{aligned}$$

Therefore,

$$\begin{aligned} G(Q_{11},Q_{12}) = ( p_1 \cos ^2\beta \, Q_{11} + p_1 \sin ^2\beta \, (1-Q_{11}) + p_2 \cos ^2\beta \, Q_{11} + p_2 \sin ^2\beta \, (1-Q_{11}) ,\end{aligned}$$
$$\begin{aligned}&p_1 (\cos \beta )^2 Q_{12} +p_1 (\sin \beta )^2 \overline{Q_{12}} + p_2 (\sin \beta )^2 \overline{Q_{12}} + p_2 (\cos \beta )^2 Q_{12} )\\&\quad = ( \cos ^2\beta \, Q_{11} + \sin ^2\beta \, (1-Q_{11}) ,\ \cos ^2\beta \, Q_{12} + \sin ^2\beta \, \overline{Q_{12}} )\end{aligned}$$

One can easily see that for any \(a\in \mathbb {R}\), taking \(Q_{11} =1/2\) and \(Q_{12}=a\) determines a fixed point for G. In order for the fixed-point matrix to be positive, we need \(-1/2\le a\le 1/2\).

In this case, the fixed point is not unique.
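
This non-uniqueness is easy to confirm numerically (our own sketch, writing t for the angle parameter of the example):

```python
import numpy as np

def ptrace2(T, n):
    # Tr_2(T)[i, k] = sum_j T[i*n + j, k*n + j]
    return np.einsum('ijkj->ik', T.reshape(n, n, n, n))

# For U = cos(t) I tensor I + i sin(t) (sigma_x tensor sigma_x), every real a
# with |a| <= 1/2 gives a fixed point Q = [[1/2, a], [a, 1/2]].
t = 0.9
sx = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.cos(t) * np.eye(4) + 1j * np.sin(t) * np.kron(sx, sx)
beta = np.diag([0.7, 0.3])
fixed = True
for a in (0.0, 0.2, -0.4):
    Q = np.array([[0.5, a], [a, 0.5]], dtype=complex)
    out = ptrace2(U @ np.kron(Q, beta) @ U.conj().T, 2)
    fixed = fixed and np.allclose(out, Q)
```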

It is more convenient to express G in terms of the variables \(Q_{11}\in [0,1]\) and \((a,b) \in \mathbb {R}^2\), where \(Q_{12}= a + b i\). As these parameters describe density matrices, there are some restrictions: \(1/4 \ge Q_{11} (1- Q_{11}) \ge a^2 + b^2\) and \(1\ge Q_{11} \ge 0\).

We denote by Re (z), the real part of the complex number z and by Im(z) its imaginary part.

In this case, we get

$$\begin{aligned}&G(Q_{11},a,b)= ( Q_{11} \alpha _1 + \beta _1 + (a_{11} + a_{12}) a + i (a_{11} -a _{12}) b , \\&\mathrm{Re} ( Q_{11} \alpha _2 + \beta _2 + (a_{21} + a_{22}) a + i (a_{21} -a _{22}) b ), \\&\mathrm{Im} ( Q_{11} \alpha _2 + \beta _2 + (a_{21} + a_{22}) a + i (a_{21} -a _{22}) b ) ). \end{aligned}$$

where

$$\begin{aligned} \alpha _1= & {} p_1 [ \overline{ U^{11}_{11}} U^{11}_{11}- \overline{ U^{11}_{12}} U^{11}_{12} + \overline{ U^{21}_{11}} U^{21}_{11} - \overline{ U^{21}_{12}} U^{21}_{12} ]\\&+ p_2 [ \overline{ U^{12}_{11}} U^{12}_{11}- \overline{ U^{12}_{12}} U^{12}_{12} + \overline{ U^{22}_{11}} U^{22}_{11} - \overline{ U^{22}_{12}} U^{22}_{12} ] , \\ \beta _1= & {} p_1 [ \overline{ U^{11}_{12}} U^{11}_{12}+ \overline{ U^{21}_{12}} U^{21}_{12} ] + p_2 [ \overline{ U^{12}_{12}} U^{12}_{12} +\overline{ U^{22}_{12}} U^{22}_{12} ],\\ \alpha _2= & {} p_1 [ \overline{ U^{11}_{21}} U^{11}_{11}- \overline{ U^{11}_{22}} U^{11}_{12} + \overline{ U^{21}_{21}} U^{21}_{11} - \overline{ U^{21}_{22}} U^{21}_{12} ] \\&+ p_2 [ \overline{ U^{12}_{21}} U^{12}_{11}- \overline{ U^{12}_{22}} U^{12}_{12} + \overline{ U^{22}_{21}} U^{22}_{11} - \overline{ U^{22}_{22}} U^{22}_{12} ], \\ \beta _2= & {} p_1 [ \overline{ U^{11}_{22}} U^{11}_{12}+ \overline{ U^{21}_{22}} U^{21}_{12} ] + p_2 [ \overline{ U^{12}_{22}} U^{12}_{12} +\overline{ U^{22}_{22}} U^{22}_{12} ] , \\ a_{11}= & {} p_1 [ \overline{ U^{11}_{12}} U^{11}_{11}+ \overline{ U^{21}_{12}} U^{21}_{11} ] + p_2 [ \overline{ U^{12}_{12}} U^{12}_{11} +\overline{ U^{22}_{12}} U^{22}_{11} ] ,\\ a_{12}= & {} p_1 [ \overline{ U^{11}_{11}} U^{11}_{12}+ \overline{ U^{21}_{11}} U^{21}_{12} ] + p_2 [ \overline{ U^{12}_{11}} U^{12}_{12} +\overline{ U^{22}_{11}} U^{22}_{12} ] ,\\ a_{21}= & {} p_1 [ \overline{ U^{11}_{22}} U^{11}_{11}+ \overline{ U^{21}_{22}} U^{21}_{11} ] + p_2 [ \overline{ U^{12}_{22}} U^{12}_{11} +\overline{ U^{22}_{22}} U^{22}_{11} ] , \\ a_{22}= & {} p_1 [ \overline{ U^{11}_{21}} U^{11}_{12}+ \overline{ U^{21}_{21}} U^{21}_{12} ] + p_2 [ \overline{ U^{12}_{21}} U^{12}_{12} +\overline{ U^{22}_{21}} U^{22}_{12} ] , \end{aligned}$$

Note that \(\alpha _1\) is a real number: each bracketed term above is a difference of squared moduli. As \(\Phi \) takes density matrices to density matrices, we have that \(\beta _1\) is also real.

Note that \(|\alpha _1|<1\) and \(1>\beta _1>0.\)

It is easy to see from the above equations that \((a_{11} + a_{12})\) and \( i (a_{11} -a _{12}) \) are both real numbers.

We are not able to say the same for \((a_{21} + a_{22})\) and \( i (a_{21} -a _{22}).\)

To find the fixed point, we have to solve

$$\begin{aligned} Q_{11} \alpha _1 + \beta _1 + (a_{11} + a_{12}) a + i (a_{11} -a _{12}) b= & {} Q_{11} \\ Q_{11} \alpha _2 + \beta _2 + (a_{21} + a_{22}) a + i (a_{21} -a _{22}) b= & {} a + bi , \end{aligned}$$

which means in matrix form

$$\begin{aligned} \left( \begin{array}{l@{\quad }l@{\quad }l} (\alpha _1-1) &{} a_{11} + a_{12} &{} i (a_{11} - a_{12})\\ \alpha _2 &{} a_{21} + a_{22} - 1 &{} i (a_{21} - a_{22} - 1) \end{array} \right) \left( \begin{array}{c} Q_{11} \\ a\\ b \end{array} \right) = \left( \begin{array}{c} - \beta _1 \\ - \beta _2\end{array} \right) .\end{aligned}$$

We are interested in real solutions \(Q_{11},a,b\).

In the case of the example mentioned above, one can show that \(\alpha _1=1\) and \(\alpha _2=0\), which means that the expressions above reduce to a system of two equations in the two variables \(a,b\).

Remember that we are interested in matrices such that \(1/4 \ge Q_{11} (1- Q_{11}) \ge a^2 + b^2.\) Notice that \(0\le Q_{11} \le 1\). As \(\Phi \) takes density matrices to density matrices, \(G\) maps this compact convex set to itself, and therefore \(G\) has a fixed point by the Brouwer fixed point theorem. The main question concerns the conditions on \(U\) and \(\beta \) under which the fixed point is unique.
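As a numerical illustration of this fixed-point problem for \(n=2\), one can iterate \(\Phi \) directly. The following is a minimal sketch: the sampled unitary \(U\) and the environment state \(\beta = \mathrm{diag}(0.7,\, 0.3)\) are arbitrary illustrative choices, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample 4x4 unitary U (QR decomposition with a phase fix) -- an arbitrary
# illustrative choice, not an operator from the text.
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Qr, Rr = np.linalg.qr(Z)
U = Qr * (np.diag(Rr) / np.abs(np.diag(Rr)))

# Assumed environment density matrix with strictly positive eigenvalues.
beta = np.diag([0.7, 0.3]).astype(complex)

def phi(Q):
    # Phi(Q) = Tr_2(U (Q tensor beta) U*): conjugate by U, then trace out
    # the second tensor factor by summing over the paired environment index.
    M = U @ np.kron(Q, beta) @ U.conj().T
    return np.einsum('ikjk->ij', M.reshape(2, 2, 2, 2))

# Iterate Phi from the maximally mixed state; existence of a fixed point is
# guaranteed (Brouwer), and for a generic U the iterates settle down.
Q = np.eye(2, dtype=complex) / 2
for _ in range(10000):
    Q = phi(Q)

residual = np.linalg.norm(phi(Q) - Q)
```

The iterates stay inside the set of density matrices (trace one, positive semidefinite), which is exactly the compact convex set to which the Brouwer argument is applied.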

If there is a solution \((\hat{Q}_{11}, \hat{a}, \hat{b})\ne (0,0,0)\) in \(\mathbb {R}^3\) to the equations

$$\begin{aligned} \hat{Q}_{11} (\alpha _1 - 1) + (a_{11} + a_{12}) \hat{a} + i (a_{11} -a _{12}) \hat{b}= & {} 0 \nonumber \\ \hat{Q}_{11} \alpha _2 + (a_{21} + a_{22}- 1) \hat{a} + i (a_{21} -a _{22}-1) \hat{b}= & {} 0, \end{aligned}$$
(4)

then, the fixed point is not unique. The condition is necessary and sufficient.

A necessary condition for the fixed point to be unique is that the determinant of the operator

$$\begin{aligned} K= \left( \begin{array}{l@{\quad }l} a_{11} + a_{12} &{} i (a_{11} - a_{12})\\ a_{21} + a_{22} - 1 &{} i (a_{21} - a_{22} - 1) \end{array} \right) \end{aligned}$$

is nonzero.

Notice that if \((z_1,z_2)\) satisfies \(K(z_1,z_2)=(0,0)\), then \(\frac{z_1}{z_2}\) is real (because \(a_{11} + a_{12} \) and \(i (a_{11} - a_{12})\) are real). From this it follows that there exists a solution \((a,b)\in \mathbb {R}^2\) in the kernel of K. In this case, \((0, a, b)\) is a nontrivial solution of (4).

The condition \(\det K\ne 0\) is an open and dense property of the unitary matrices U. Indeed, there are six free parameters among the coefficients \(U_{rs}^{ij}\). Given an initial unitary operator U, one can fix five of them and perturb the last one slightly. This changes U and moves the determinant of \(K_U\), so that a small perturbation of the initial U avoids the value 0.
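This genericity can be probed numerically without reconstructing \(K\) from the entries of \(U\): uniqueness of the fixed point amounts to \(1\) being a simple eigenvalue of the linear map \(Q \mapsto \Phi (Q)\), vectorized as a \(4\times 4\) matrix. A sketch, again with an assumed sample unitary \(U\) and an assumed \(\beta = \mathrm{diag}(0.7,\, 0.3)\):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative data: a sample 4x4 unitary and an environment state.
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Qr, Rr = np.linalg.qr(Z)
U = Qr * (np.diag(Rr) / np.abs(np.diag(Rr)))
beta = np.diag([0.7, 0.3]).astype(complex)

def phi(Q):
    # Phi(Q) = Tr_2(U (Q tensor beta) U*).
    M = U @ np.kron(Q, beta) @ U.conj().T
    return np.einsum('ikjk->ij', M.reshape(2, 2, 2, 2))

# Vectorize the linear map Q -> Phi(Q): column k of L is Phi applied to the
# k-th matrix unit, flattened row-major.
L = np.zeros((4, 4), dtype=complex)
for k in range(4):
    E = np.zeros(4, dtype=complex)
    E[k] = 1.0
    L[:, k] = phi(E.reshape(2, 2)).reshape(4)

eigenvalues = np.linalg.eigvals(L)
# Trace preservation forces 1 into the spectrum; for a generic U it is simple.
multiplicity_of_one = int(np.sum(np.abs(eigenvalues - 1) < 1e-7))
```

For the degenerate unitaries excluded by the theorem (for instance those making the diagonal of \(Q\) invariant), the eigenvalue \(1\) acquires higher multiplicity and the computed count exceeds one.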

Suppose U satisfies the property \(\det K\ne 0\). For each real value \(Q_{11}\), we get a unique \((a_{Q_{11}},b_{Q_{11}})\) which is a solution of \(K(a,b)= (- Q_{11} (\alpha _1-1), - Q_{11} \alpha _2)\).

In this way, we get an infinite number of solutions \((Q_{11},a_{Q_{11}},b_{Q_{11}})\in \mathbb {R} \times \mathbb {C}^2\) to (4).

Note, however, that in general \(\alpha _2\) is not real.

But we need solutions in \(\mathbb {R}^3\). Denote by \(S=S_U\) the linear subspace of vectors in \(\mathbb {C}^2\) of the form \(\rho (\alpha _1 -1 , \alpha _2),\) where \(\rho \) is complex.

Lemma 3

For an open and dense set of unitary U, we get that \(K^{-1} (S) \cap \mathbb { R}^2= \{(0,0)\}.\) For such U, if \((Q_{11},a,b) \) satisfies Eq. (4), then the non-trivial solutions \((\hat{a},\hat{b})\) of

$$\begin{aligned} K(\hat{a},\hat{b})= (- Q_{11} (\alpha _1-1), - Q_{11} \alpha _2) \end{aligned}$$

are not in \(\mathbb {R}^2\).

Proof

Write \(\frac{ \alpha _1 - 1}{\alpha _2} = \alpha + \beta i= z^0=z_U^0.\) Note that for a generic U, we have that \(\alpha _2\ne 0.\)

We denote \(C_{11} = a_{11} + a_{12}\), \(C_{12} = i (a_{11} - a_{12})\), \(C_{21} = a_{21} + a_{22}-1 \) and finally \(C_{22} =i ( a_{21} - a_{22}- 1 ) \).

Suppose \((Q_{11},a,b) \in \mathbb {R}^3\) satisfies Eq. (4). If \(Q_{11}=0\), then \((a,b)\) lies in the kernel of \(K\), which is trivial because \(\det K\ne 0\); hence we may assume \(Q_{11}\ne 0\).

For each \(C_{ij}\), we write \(C_{ij} = C_{ij}^1 + C_{ij}^2 i\) with \(C_{ij}^1, C_{ij}^2\) real, where \( i,j=1,2\).

If \(K(\hat{a},\hat{b})= (- Q_{11} (\alpha _1-1), - Q_{11} \alpha _2)\), then

$$\begin{aligned} C_{11} \hat{a} + C_{12} \hat{b}= z^0 ( C_{21} \hat{a} + C_{22} \hat{b})= (\alpha + \beta i) ( C_{21} \hat{a} + C_{22} \hat{b}).\end{aligned}$$

In this case

$$\begin{aligned} C_{11} \hat{a} + C_{12} \hat{b}= & {} (\alpha C_{21}^1 - \beta C_{21}^2) \hat{a} + (\alpha C_{22}^1 - \beta C_{22}^2) \hat{b} \\&+ \, i \left[ (\beta C_{21}^1 + \alpha C_{21}^2) \hat{a} + (\beta C_{22}^1 + \alpha C_{22}^2) \hat{b}\right] . \end{aligned}$$

If \(\hat{a}\) and \(\hat{b}\) are real, then, as \(C_{11}\) and \(C_{12}\) are real, taking the imaginary part of the last equation gives

$$\begin{aligned} (\beta C_{21}^1 + \alpha C_{21}^2) \hat{a} + (\beta C_{22}^1 + \alpha C_{22}^2) \hat{b}=0. \end{aligned}$$
(5)

Moreover, taking the real part gives

$$\begin{aligned} (\alpha C_{21}^1 - \beta C_{21}^2- C_{11}) \hat{a} + (\alpha C_{22}^1 - \beta C_{22}^2 - C_{12} ) \hat{b}=0 . \end{aligned}$$
(6)

If

$$\begin{aligned} \text {Det} \left( \begin{array}{cc} \beta C_{21}^1 + \alpha C_{21}^2&{} \beta C_{22}^1 + \alpha C_{22}^2\\ \alpha C_{21}^1 - \beta C_{21}^2- C_{11} &{} \alpha C_{22}^1 - \beta C_{22}^2 - C_{12} \end{array} \right) \ne 0 , \end{aligned}$$

then just the trivial solution (0, 0) satisfies (5) and (6).

The above determinant is nonzero in an open and dense set of U.

Then \(\hat{a} = \hat{b} = 0\); since \(|\alpha _1|<1\), Eq. (4) forces \(Q_{11}=0\) as well, so the solution \((Q_{11},a,b) \in \mathbb {R}^3\) of (4) has to be trivial. \(\square \)

Under these two assumptions on U (which hold on an open and dense set), the fixed point for G is unique. It follows that the density matrix \(Q=Q_\Phi \) which is invariant for \(\Phi \) is unique. Given an initial \(Q_0\), any convergent subsequence \(\Phi ^{n_k}(Q_0),\) \(k \rightarrow \infty ,\) will converge to the fixed point (because it is unique).

As

$$\begin{aligned}&G(Q_{11},a,b)= ( Q_{11} \alpha _1 + \beta _1 + (a_{11} + a_{12}) a + i (a_{11} -a _{12}) b , \\&\mathrm{Re} ( Q_{11} \alpha _2 + \beta _2 + (a_{21} + a_{22}) a + i (a_{21} -a _{22}) b ), \\&\mathrm{Im} ( Q_{11} \alpha _2 + \beta _2 + (a_{21} + a_{22}) a + i (a_{21} -a _{22}) b ) ), \end{aligned}$$

one can find the explicit solution

$$\begin{aligned} Q_\Phi = \left( \begin{array}{c@{\quad }c} Q_{11} &{} a+ b i\\ a-b i&{} 1- Q_{11} \end{array} \right) \end{aligned}$$

by solving the linear problem \(G(Q_{11},a,b)=(Q_{11},a,b)\).
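Numerically, the same linear problem can be solved by vectorizing \(\Phi \) and taking the eigenvector for the eigenvalue \(1\), rescaled to trace one. The following sketch uses an assumed sample unitary \(U\) and an assumed \(\beta = \mathrm{diag}(0.7,\, 0.3)\), neither of which comes from the text:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed illustrative data: a sample 4x4 unitary and an environment state.
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Qr, Rr = np.linalg.qr(Z)
U = Qr * (np.diag(Rr) / np.abs(np.diag(Rr)))
beta = np.diag([0.7, 0.3]).astype(complex)

def phi(Q):
    # Phi(Q) = Tr_2(U (Q tensor beta) U*).
    M = U @ np.kron(Q, beta) @ U.conj().T
    return np.einsum('ikjk->ij', M.reshape(2, 2, 2, 2))

# Vectorize Q -> Phi(Q) as a 4x4 matrix L acting on row-major-flattened Q:
# column k of L is Phi applied to the k-th matrix unit.
L = np.column_stack([
    phi(np.eye(4, dtype=complex)[k].reshape(2, 2)).reshape(4) for k in range(4)
])

# The fixed point Q_Phi is the eigenvector of L for the eigenvalue 1,
# rescaled to trace one (and symmetrized to remove floating-point noise).
w, V = np.linalg.eig(L)
v = V[:, np.argmin(np.abs(w - 1))]
Q_fix = v.reshape(2, 2)
Q_fix = Q_fix / np.trace(Q_fix)
Q_fix = (Q_fix + Q_fix.conj().T) / 2
```

Rescaling by the trace also removes the arbitrary complex phase of the computed eigenvector, so \(Q_{\mathrm{fix}}\) recovers the density matrix \(Q_\Phi \) directly; its entries give \(Q_{11}\), \(a\), and \(b\) of the real linear system above.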