1 Introduction

The discrete-time quantum walk (QW) is a quantum counterpart of the classical random walk and has been extensively investigated over the last decade. A striking property of the QW is the spreading behavior of the walker: the standard deviation of the walker’s position grows linearly in time, quadratically faster than that of the classical random walk, i.e., ballistic spreading. On the other hand, the walker can also stay near the starting position: localization occurs. Interestingly, a quantum walker can exhibit both ballistic spreading and localization. Reviews and books on QWs include Kempe [1], Kendon [2], Venegas-Andraca [3, 4], Konno [5], Cantero et al. [6], Manouchehri and Wang [7], and Portugal [8]. The quaternions were discovered by Hamilton in 1843 and can be regarded as an extension of the complex numbers. For a survey on quaternions and matrices of quaternions, see Zhang [9], for example. In this paper, we extend the QW to a walk given by a unitary matrix whose components are quaternions; we call the resulting walk a quaternionic quantum walk (QQW). The detailed definition of the QQW is given in Sect. 2. In the present manuscript, we explore the relation between QWs and QQWs.

We now introduce some notation and a basic result on quaternions. Let \(\mathbb {R}\) be the set of real numbers. Let \(\mathbb {H}\) denote the set of quaternions of the form

$$\begin{aligned} x=x_0+x_1i+x_2j+x_3k, \end{aligned}$$

where \(x_0, x_1, x_2, x_3 \in \mathbb {R}\) and

$$\begin{aligned} i^2&= j^2 = k^2 = -1,\\ ij&= -ji = k, \quad jk= -kj =i, \quad ki=-ik = j. \end{aligned}$$

Then, a direct computation gives

Proposition 1.1

For \(x=x_0+x_1i+x_2j+x_3k \in \mathbb {H}\> (x_0, x_1, x_2, x_3 \in \mathbb {R})\),

$$\begin{aligned} x^2 = x_0^2 - x_1^2 - x_2^2 - x_3^2 + 2 x_0 (x_1 i + x_2 j + x_3 k). \end{aligned}$$

For \(x=x_0+x_1i+x_2j+x_3k \in \mathbb {H}\> (x_0, x_1, x_2, x_3 \in \mathbb {R})\), let

$$\begin{aligned} \overline{x} = x^{*} = x_0-x_1i-x_2j-x_3k \end{aligned}$$

be the conjugate of \(x\), and

$$\begin{aligned} |x| = \sqrt{x x^{*}} = \sqrt{x^{*} x} = \sqrt{x_0^2+x_1^2+x_2^2+x_3^2} \end{aligned}$$

be the modulus of \(x\). Moreover, we define \(\mathfrak {R}x = x_0,\) the real part of \(x\), and \(\mathfrak {I}x = x_1i+x_2j+x_3k,\) the imaginary part of \(x\).
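These operations are straightforward to check numerically. The following sketch (plain Python; the helper names `qmul`, `qconj`, and `qabs` are ours, not the paper's) stores a quaternion as the 4-tuple \((x_0, x_1, x_2, x_3)\), implements the Hamilton product, conjugate, and modulus, and verifies the multiplication rules together with Proposition 1.1.

```python
import math

# Quaternion x0 + x1*i + x2*j + x3*k stored as a 4-tuple (x0, x1, x2, x3).
def qmul(p, q):
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def qabs(q):
    return math.sqrt(sum(t*t for t in q))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# Multiplication rules: ij = k, jk = i, ki = j, and i^2 = j^2 = k^2 = -1.
assert qmul(i, j) == k and qmul(j, k) == i and qmul(k, i) == j
assert qmul(i, i) == (-1, 0, 0, 0)

# Proposition 1.1: x^2 = x0^2 - x1^2 - x2^2 - x3^2 + 2 x0 (x1 i + x2 j + x3 k).
x = (1.0, 2.0, -0.5, 3.0)
x0, x1, x2, x3 = x
expected = (x0*x0 - x1*x1 - x2*x2 - x3*x3, 2*x0*x1, 2*x0*x2, 2*x0*x3)
assert qmul(x, x) == expected

# |x|^2 = x x* (a real quaternion).
xxbar = qmul(x, qconj(x))
assert abs(xxbar[0] - qabs(x)**2) < 1e-12 and xxbar[1:] == (0.0, 0.0, 0.0)
```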

Let \(\hbox {M}(n, \mathbb {C})\) and \(\hbox {M}(n, \mathbb {H})\) be the set of all \(n \times n\) matrices with complex and quaternion entries, respectively. For \(A=(a_{st}) \in \hbox {M}(n, \mathbb {H})\), we put \(\overline{A}= (\overline{a}_{st})=(a^{*}_{st})\) and \(A^{*} = {}^\mathrm{T} (\overline{A})\), where \({}^\mathrm{T} A\) denotes the transpose of \(A\). If \(A A^{*} = I,\) then \(A \in \hbox {M}(n, \mathbb {H})\) is said to be unitary, where \(I\) is the identity matrix. Let \(U(n, \mathbb {C})\) and \(U(n, \mathbb {H})\) denote the set of \(n \times n\) unitary matrices with complex and quaternion entries, respectively.
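Unitarity over \(\mathbb {H}\) can also be tested numerically via the standard embedding of \(\mathbb {H}\) into \(\hbox {M}(2, \mathbb {C})\) (an assumption of this sketch, not a device used in the paper): a quaternion \(x_0+x_1i+x_2j+x_3k\) is represented by a \(2 \times 2\) complex matrix so that quaternion products become matrix products and \(x^{*}\) becomes the conjugate transpose. A matrix in \(\hbox {M}(2, \mathbb {H})\) then becomes a \(4 \times 4\) complex block matrix, and \(A A^{*} = I\) becomes ordinary complex unitarity. We illustrate with the coin used later in Sect. 2.

```python
import numpy as np

# Embed x0 + x1*i + x2*j + x3*k as a 2x2 complex matrix; quaternion products
# and conjugation become matrix products and conjugate transpose.
def quat(x0, x1, x2, x3):
    return np.array([[ x0 + 1j*x1, x2 + 1j*x3],
                     [-x2 + 1j*x3, x0 - 1j*x1]])

one, qi, qj, qk = quat(1,0,0,0), quat(0,1,0,0), quat(0,0,1,0), quat(0,0,0,1)

assert np.allclose(qi @ qj, qk)                       # ij = k
x = quat(1.0, 2.0, -0.5, 3.0)
assert np.allclose(x.conj().T, quat(1.0, -2.0, 0.5, -3.0))   # x -> x*

# The coin U = (1/sqrt 2) [[1, i], [j, k]] used in Sect. 2, as a 4x4 block matrix.
U = np.block([[one, qi], [qj, qk]]) / np.sqrt(2)

# A A* = I over H becomes ordinary complex unitarity of the embedding.
assert np.allclose(U @ U.conj().T, np.eye(4))
```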

The discrete-time QW on \(\mathbb {Z}\) with two chiralities is defined by \(U \in U(2, \mathbb {C})\), which was first intensively studied by Ambainis et al. [10], where \(\mathbb {Z}\) is the set of integers. Our QQW is determined by \(U \in U(2, \mathbb {H})\). Let \(H\) be the Hadamard gate, that is,

$$\begin{aligned} H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1&\quad 1 \\ 1&\quad -1 \end{bmatrix}. \end{aligned}$$

If \(U=H\), then the QQW reduces to the Hadamard walk, which has been intensively studied in the QW literature.

The rest of the present paper is organized as follows. Section 2 gives the detailed definition of QQWs on \(\mathbb {Z}\). In Sect. 3, we present some results on QQWs. Proofs of Theorems 3.1 and 3.4 are given in Sects. 4 and 5, respectively. We consider stationary measures on QQWs for \(a=0\) (Sect. 6) and \(b=0\) (Sect. 7), respectively. Section 8 is devoted to summary.

2 Model

The discrete-time QQW is a quaternionic version of the QW with an additional degree of freedom called chirality. The chirality takes values left and right and specifies the direction of the walker's motion. At each time step, if the walker has left chirality, it moves one step to the left; if it has right chirality, it moves one step to the right. Let us define

$$\begin{aligned} |L\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad |R\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \end{aligned}$$

where \(L\) and \(R\) refer to the left and right chirality states, respectively.

The walk is determined by \(U \in U(2, \mathbb {H})\), where

$$\begin{aligned} U = \begin{bmatrix} a&\quad b \\ c&\quad d \end{bmatrix}. \end{aligned}$$
(2.1)

To define the dynamics of our model, we divide \(U\) into two matrices:

$$\begin{aligned} P = \begin{bmatrix} a&\quad b \\ 0&\quad 0 \end{bmatrix}, \quad Q = \begin{bmatrix} 0&\quad 0 \\ c&\quad d \end{bmatrix}, \end{aligned}$$

with \(U =P+Q\). The important point is that \(P\) (resp. \(Q\)) represents the walker moving one step to the left (resp. right) at each time step.

Let \(\Psi _n (\in (\mathbb {H}^{2})^{\mathbb {Z}})\) denote the state at time \(n\) of the QQW on \(\mathbb {Z}\):

$$\begin{aligned} \Psi _{n}&= {}^\mathrm{T}\!\left[ \dots ,\Psi _{n}^{L}(-1),\Psi _{n}^{R}(-1),\Psi _{n}^{L}(0),\Psi _{n}^{R}(0),\Psi _{n}^{L}(1),\Psi _{n}^{R}(1),\dots \right] , \\&= {}^\mathrm{T}\!\left[ \dots , \Psi _{n}(-1), \Psi _{n}(0),\Psi _{n}(1), \dots \right] , \\&= {}^\mathrm{T}\!\left[ \dots ,\begin{bmatrix} \Psi _{n}^{L}(-1)\\ \Psi _{n}^{R}(-1)\end{bmatrix},\begin{bmatrix} \Psi _{n}^{L}(0)\\ \Psi _{n}^{R}(0)\end{bmatrix},\begin{bmatrix} \Psi _{n}^{L}(1)\\ \Psi _{n}^{R}(1)\end{bmatrix},\dots \right] , \end{aligned}$$

where \({}^\mathrm{T}\) denotes the transpose operation and \(\Psi _{n}(x) = {}^\mathrm{T}\![ \Psi _{n}^{L}(x), \Psi _{n}^{R}(x)]\) is the quaternionic amplitude at time \(n\) and position \(x\). Then, the time evolution of the walk is defined by

$$\begin{aligned} \Psi _{n+1}(x)= P \Psi _{n} (x+1) + Q \Psi _{n}(x-1). \end{aligned}$$

That is,

$$\begin{aligned} \begin{bmatrix} \Psi _{n+1}^{L}(x)\\ \Psi _{n+1}^{R}(x) \end{bmatrix} = \begin{bmatrix} a \Psi _{n}^{L}(x+1)+b \Psi _{n}^{R}(x+1)\\ c \Psi _{n}^{L}(x-1)+d \Psi _{n}^{R}(x-1) \end{bmatrix}. \end{aligned}$$

Now, let

$$\begin{aligned} U^{(s)}=\begin{bmatrix} \ddots&\vdots&\vdots&\vdots&\vdots&\vdots&\dots \\ \dots&O&P&O&O&O&\dots \\ \dots&Q&O&P&O&O&\dots \\ \dots&O&Q&O&P&O&\dots \\ \dots&O&O&Q&O&P&\dots \\ \dots&O&O&O&Q&O&\dots \\ \dots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots \end{bmatrix}\;\;\; \hbox {with} \;\;\; O=\begin{bmatrix} 0&\quad 0\\ 0&\quad 0 \end{bmatrix}. \end{aligned}$$

Then, the state of the QQW at time \(n\) is given by

$$\begin{aligned} \Psi _{n}=(U^{(s)})^{n}\Psi _{0}, \end{aligned}$$
(2.2)

for any \(n\ge 0\). Let \(\mathbb {R}_{+}=[0,\infty )\). Here, we introduce a map \(\phi :(\mathbb {H}^{2})^{\mathbb {Z}}\rightarrow \mathbb {R}_{+}^{\mathbb {Z}}\) such that if

$$\begin{aligned} \Psi = {}^\mathrm{T}\!\left[ \dots ,\begin{bmatrix} \Psi ^{L}(-1)\\ \Psi ^{R}(-1)\end{bmatrix},\begin{bmatrix} \Psi ^{L}(0)\\ \Psi ^{R}(0)\end{bmatrix},\begin{bmatrix} \Psi ^{L}(1)\\ \Psi ^{R}(1)\end{bmatrix},\dots \right] \in (\mathbb {H}^{2})^{\mathbb {Z}}, \end{aligned}$$

then

$$\begin{aligned} \phi (\Psi ) = {}^\mathrm{T}\! \left[ \ldots , |\Psi ^{L}(-1)|^2 + |\Psi ^{R}(-1)|^2, |\Psi ^{L}(0)|^2 + |\Psi ^{R}(0)|^2, |\Psi ^{L}(1)|^2 + |\Psi ^{R}(1)|^2, \ldots \right] \in \mathbb {R}_{+}^{\mathbb {Z}}. \end{aligned}$$

That is, for any \(x \in \mathbb {Z}\),

$$\begin{aligned} \phi (\Psi ) (x) = |\Psi ^{L}(x)|^2 + |\Psi ^{R}(x)|^2. \end{aligned}$$

Sometimes, we identify \(\phi (\Psi (x))\) with \(\phi (\Psi ) (x)\). Moreover, we define the measure of the QQW at position \(x\) by

$$\begin{aligned} \mu (x)=\phi (\Psi (x)) \quad (x \in \mathbb {Z}). \end{aligned}$$

Now, we are ready to introduce the set of stationary measures:

$$\begin{aligned} \mathcal{M}_{s}&= \mathcal{M}_s (U) \\&= \left\{ \mu \in \mathbb {R}_{+}^{\mathbb {Z}} \setminus \{ \varvec{0} \} : \hbox {there exists} \; \Psi _{0} \; \hbox {such that} \; \phi ((U^{(s)})^{n}\Psi _{0})=\mu \; (n \ge 0) \right\} , \end{aligned}$$

where \(\varvec{0}\) is the zero vector. We call the element of \(\mathcal{M}_{s}\) the stationary measure of the QQW.

Next, we consider the right (not left) eigenvalue problem of the QQW:

$$\begin{aligned} U^{(s)} \Psi = \Psi \lambda \quad (\lambda \in \mathbb {H}). \end{aligned}$$
(2.3)

Since the quaternions do not commute, it is necessary to treat \(U^{(s)} \Psi = \lambda \Psi \) and \(U^{(s)} \Psi = \Psi \lambda \) separately. Concerning left and right eigenvalues for the quaternionic matrix, and their properties, see Huang and So [11]. From Eq. (2.3), we have

$$\begin{aligned} \left( U^{(s)} \right) ^2 \Psi = U^{(s)} \left( U^{(s)} \Psi \right) = \left( U^{(s)} \Psi \right) \lambda = \Psi \lambda ^2. \end{aligned}$$

In general, we see

$$\begin{aligned} \left( U^{(s)} \right) ^n \Psi = \Psi \lambda ^n \quad (n \ge 1). \end{aligned}$$

We should remark that \(|\lambda |=1\), since \(U^{(s)}\) is unitary. We sometimes write \(\Psi =\Psi ^{(\lambda )}\) to emphasize the dependence on eigenvalue \(\lambda \). Then, we have \(\phi (\Psi ^{(\lambda )}) \in \mathcal{M}_s\).

We see that Eq. (2.3) is equivalent to

$$\begin{aligned} \Psi ^{L}(x) \lambda&= a \Psi ^{L}(x+1) + b \Psi ^{R} (x+1), \end{aligned}$$
(2.4)
$$\begin{aligned} \Psi ^{R}(x) \lambda&= c \Psi ^{L}(x-1) + d \Psi ^{R}(x-1), \end{aligned}$$
(2.5)

for any \(x \in \mathbb {Z}\).

Put

$$\begin{aligned} \varphi = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \in \mathbb {H}^2, \end{aligned}$$

with \(\alpha , \beta \in \mathbb {H}\) and \(|\alpha |^2+|\beta |^2=1\). Let \(\Psi _0 ^{\varphi }\) be the initial state for the QQW starting from \(\varphi \) at the origin:

$$\begin{aligned} \Psi _0 ^{\varphi }&={}^\mathrm{T} \left[ \ldots , \begin{bmatrix} \Psi ^{L} (-2) \\ \Psi ^{R} (-2) \end{bmatrix}, \begin{bmatrix} \Psi ^{L} (-1) \\ \Psi ^{R} (-1) \end{bmatrix}, \begin{bmatrix} \Psi ^{L} (0) \\ \Psi ^{R} (0) \end{bmatrix}, \begin{bmatrix} \Psi ^{L} (1) \\ \Psi ^{R} (1) \end{bmatrix}, \begin{bmatrix} \Psi ^{L} (2) \\ \Psi ^{R} (2) \end{bmatrix}, \ldots \right] , \\&= {}^\mathrm{T} \left[ \ldots , \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \varphi , \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \ldots \right] . \end{aligned}$$

The probability that the quaternionic quantum walker \(X_n= X_n ^{\varphi }\) with initial state \(\Psi _0 ^{\varphi }\) is at position \(x \in \mathbb {Z}\) at time \(n\) is defined by

$$\begin{aligned} P \left( X_n = x \right) = P \left( X_n ^{\varphi } = x \right) = \phi \left( \left( U^{(s)} \right) ^n \Psi _0 ^{\varphi } \right) (x). \end{aligned}$$

To compute \(P \left( X_n = x \right) \), we consider the following quantity. For fixed \(l\) and \(m\) with \(l+m=n\) and \(-l+m=x\), we define

$$\begin{aligned} \Xi _n (l,m)= \sum _{l_j,m_j} P^{l_n}Q^{m_n}P^{l_{n-1}}Q^{m_{n-1}} \dots P^{l_2}Q^{m_2} P^{l_1}Q^{m_1} \end{aligned}$$

summed over all \(l_j, \> m_j \in \{0, 1\}\) satisfying \(l_1+ \dots +l_n=l, \> m_1+ \dots + m_n = m,\) and \(l_j + m_j =1 \> (1 \le j \le n)\). For example, \(\Xi _3 (2,1) = P^2Q +PQP+ QP^2\). We put

$$\begin{aligned} \varphi = \begin{bmatrix} \Psi ^{L} (0) \\ \Psi ^{R} (0) \end{bmatrix} = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \in \mathbb {H}^2. \end{aligned}$$

By definition, we see that

$$\begin{aligned} \Psi _n (x) = \Xi _n (l,m) \varphi , \end{aligned}$$

since \(\Psi _n (x)\) is the two-component amplitude vector of the quaternionic quantum walker at position \(x\) at time \(n\) for the initial state \(\varphi \) at the origin, and \(\Xi _n (l,m)\) is the sum over all possible paths consisting of \(l\) steps to the left and \(m\) steps to the right, where \(l=(n-x)/2\) and \(m=(n+x)/2\).

From now on, we consider \(\Xi _n (l, m)\) for \(n=3,4\). For \(n=3\), we get

$$\begin{aligned} \Xi _3 (3, 0)&= \begin{bmatrix} a^3&\quad a^2b \\ 0&\quad 0 \end{bmatrix}, \quad \Xi _3 (2, 1) = \begin{bmatrix} abc+bca&\quad abd+bcb \\ ca^2&\quad cab \end{bmatrix}, \\ \Xi _3 (1, 2)&= \begin{bmatrix} bdc&\quad bd^2 \\ cbc+dca&\quad cbd+dcb \end{bmatrix}, \quad \Xi _3 (0, 3) = \begin{bmatrix} 0&\quad 0 \\ d^2c&\quad d^3 \end{bmatrix}. \end{aligned}$$

For \(n=4\), we obtain

$$\begin{aligned} \Xi _4 (4, 0)&= \begin{bmatrix} a^4&\quad a^3b \\ 0&\quad 0 \end{bmatrix}, \quad \Xi _4 (3, 1) = \left[ \begin{array}{lll} abca+bca^2+a^2bc &{}\quad a^2bd+abcb+bcab \\ ca^3 &{}\quad ca^2b \end{array}\right] , \\ \Xi _4 (2, 2)&= \begin{bmatrix} bdca+abdc+bcbc&abd^2+bdcb+bcbd \\ cbca+dca^2+cabc&dcab+cbcb+cabd \end{bmatrix}, \\ \Xi _4 (1, 3)&= \left[ \begin{array}{lll} bd^2c &{}\quad bd^3 \\ dcbc+d^2ca+cbdc &{}\quad dcbd+d^2cb+cbd^2 \end{array}\right] , \quad \Xi _4 (0, 4) = \begin{bmatrix} 0&\quad 0 \\ d^3c&\quad d^4 \end{bmatrix}. \end{aligned}$$

As an example, we deal with the following QQW defined by

$$\begin{aligned} U = \frac{1}{\sqrt{2}} \begin{bmatrix} 1&\quad i \\ j&\quad k \end{bmatrix}. \end{aligned}$$

When \(n=3\),

$$\begin{aligned} \Xi _3 (3, 0)&= \frac{1}{2\sqrt{2}} \begin{bmatrix} 1&\quad i \\ 0&\quad 0 \end{bmatrix}, \quad \Xi _3 (2, 1) = \frac{1}{2\sqrt{2}} \begin{bmatrix} 2k&\quad 0 \\ j&\quad -k \end{bmatrix}, \\ \Xi _3 (1,2)&= \frac{1}{2\sqrt{2}} \begin{bmatrix} 1&\quad -i \\ 0&\quad 2 \end{bmatrix}, \quad \Xi _3 (0,3) = \frac{1}{2\sqrt{2}} \begin{bmatrix} 0&\quad 0 \\ -j&\quad -k \end{bmatrix}. \end{aligned}$$

When \(n=4\),

$$\begin{aligned} \Xi _4 (4, 0)&= \frac{1}{4} \begin{bmatrix} 1&\quad i \\ 0&\quad 0 \end{bmatrix}, \quad \Xi _4 (3, 1) = \frac{1}{4} \begin{bmatrix} 3k&\quad j \\ j&\quad -k \end{bmatrix}, \quad \Xi _4 (2, 2) = \frac{1}{4} \begin{bmatrix} 1&\quad i \\ i&\quad 1 \end{bmatrix}, \\ \Xi _4 (1, 3)&= \frac{1}{4} \begin{bmatrix} -k&\quad j \\ j&\quad 3k \end{bmatrix}, \quad \Xi _4 (0, 4) = \frac{1}{4} \begin{bmatrix} 0&\quad 0 \\ i&\quad 1 \end{bmatrix}. \end{aligned}$$
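These matrices can be reproduced by brute force: embedding each quaternion as a \(2 \times 2\) complex matrix turns \(P\) and \(Q\) into \(4 \times 4\) complex block matrices, and \(\Xi _n (l,m)\) becomes a sum of ordinary matrix products over all words with \(l\) letters \(P\) and \(m\) letters \(Q\). A sketch (the embedding and the helper names are our own device, not the paper's):

```python
import numpy as np
from itertools import combinations

def quat(x0, x1, x2, x3):
    return np.array([[ x0 + 1j*x1, x2 + 1j*x3],
                     [-x2 + 1j*x3, x0 - 1j*x1]])

one, qi, qj, qk = quat(1,0,0,0), quat(0,1,0,0), quat(0,0,1,0), quat(0,0,0,1)
zero = np.zeros((2, 2))
s = 1/np.sqrt(2)

P = s*np.block([[one, qi], [zero, zero]])   # 4x4 complex image of P
Q = s*np.block([[zero, zero], [qj, qk]])    # 4x4 complex image of Q

def xi(n, l):
    # Xi_n(l, n-l): sum over all words with l letters P and n-l letters Q.
    total = np.zeros((4, 4), dtype=complex)
    for ppos in combinations(range(n), l):
        M = np.eye(4, dtype=complex)
        for t in range(n):
            M = M @ (P if t in ppos else Q)
        total += M
    return total

# Compare with the matrices displayed above.
assert np.allclose(xi(3, 2), np.block([[2*qk, zero], [qj, -qk]]) / (2*np.sqrt(2)))
assert np.allclose(xi(4, 3), np.block([[3*qk, qj], [qj, -qk]]) / 4)
```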

If we take an initial state at the origin \(\varphi = {}^\mathrm{T}[1/\sqrt{2}, j/\sqrt{2}]\), then we have

$$\begin{aligned} \Xi _3 (3, 0) \varphi&= \frac{1}{4} \begin{bmatrix} 1+k \\ 0 \end{bmatrix}, \quad \Xi _3 (2, 1) \varphi = \frac{1}{4} \begin{bmatrix} 2k \\ i+j \end{bmatrix}, \\ \Xi _3 (1,2) \varphi&= \frac{1}{4} \begin{bmatrix} 1-k \\ 2j \end{bmatrix}, \quad \Xi _3 (0,3) \varphi = \frac{1}{4} \begin{bmatrix} 0 \\ i-j \end{bmatrix}, \\ \Xi _4 (4, 0) \varphi&= \frac{1}{4 \sqrt{2}} \begin{bmatrix} 1+k \\ 0 \end{bmatrix}, \quad \Xi _4 (3, 1) \varphi = \frac{1}{4 \sqrt{2}} \begin{bmatrix} -1+3k \\ i+j \end{bmatrix}, \quad \Xi _4 (2, 2) \varphi = \frac{1}{4 \sqrt{2}} \begin{bmatrix} 1+k \\ i+j \end{bmatrix}, \\ \Xi _4 (1, 3) \varphi&= \frac{1}{4 \sqrt{2}} \begin{bmatrix} -1-k \\ -3i+j \end{bmatrix}, \quad \Xi _4 (0, 4) \varphi = \frac{1}{4 \sqrt{2}} \begin{bmatrix} 0 \\ i+j \end{bmatrix}. \end{aligned}$$

Therefore, we get

$$\begin{aligned} P(X_3 = -3)&= P(X_3 = 3) =1/8, \quad P(X_3 = -1) = P(X_3 = 1)=3/8, \\ P(X_4 = -4)&= P(X_4 = 4) =1/16, \\ P(X_4 = -2)&= P(X_4 = 2)=6/16, \quad P(X_4 = 0) = 2/16. \end{aligned}$$

It is worth noting that the probability distributions for \(n=0,1,2,3,4\) are the same as those of the symmetric Hadamard walk with an initial state at the origin, e.g., \(\varphi = {}^\mathrm{T}[1/\sqrt{2}, i/\sqrt{2}]\).
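The distributions above can be reproduced by iterating the update rule \(\Psi _{n+1}(x)= P \Psi _{n} (x+1) + Q \Psi _{n}(x-1)\) directly. A minimal sketch, again assuming the \(2 \times 2\) complex-matrix representation of quaternions (so the amplitude at a site is a \(4 \times 2\) complex block and \(|\Psi ^{L}(x)|^2 + |\Psi ^{R}(x)|^2\) is half its squared Frobenius norm); the helper names are ours:

```python
import numpy as np

def quat(x0, x1, x2, x3):
    return np.array([[ x0 + 1j*x1, x2 + 1j*x3],
                     [-x2 + 1j*x3, x0 - 1j*x1]])

one, qi, qj, qk = quat(1,0,0,0), quat(0,1,0,0), quat(0,0,1,0), quat(0,0,0,1)
zero = np.zeros((2, 2))
s = 1/np.sqrt(2)

P = s*np.block([[one, qi], [zero, zero]])   # left-moving part of U
Q = s*np.block([[zero, zero], [qj, qk]])    # right-moving part of U

def step(state):
    # Psi_{n+1}(x) = P Psi_n(x+1) + Q Psi_n(x-1)
    new = {}
    for x, amp in state.items():
        new[x - 1] = new.get(x - 1, np.zeros((4, 2), dtype=complex)) + P @ amp
        new[x + 1] = new.get(x + 1, np.zeros((4, 2), dtype=complex)) + Q @ amp
    return new

def measure(state):
    # |Psi^L(x)|^2 + |Psi^R(x)|^2 = half the squared Frobenius norm of the block
    return {x: np.linalg.norm(amp)**2 / 2 for x, amp in state.items()}

state = {0: np.vstack([s*one, s*qj])}   # phi = T[1/sqrt 2, j/sqrt 2] at the origin
for _ in range(3):
    state = step(state)
mu3 = measure(state)
assert np.allclose([mu3[x] for x in (-3, -1, 1, 3)], [1/8, 3/8, 3/8, 1/8])

state = step(state)
mu4 = measure(state)
assert np.allclose([mu4[x] for x in (-4, -2, 0, 2, 4)],
                   [1/16, 6/16, 2/16, 6/16, 1/16])
```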

From now on, we treat general \(\Xi _n (l,m)\). For example,

$$\begin{aligned} \Xi _4 (3,1) = QP^3 + PQP^2 + P^2QP + P^3Q. \end{aligned}$$

Here, we find a nice relation: \(P^2 = aP.\) We introduce the following \(2 \times 2\) matrices, \(R\) and \(S\):

$$\begin{aligned} R= \left[ \begin{array}{cc} c &{}\quad d \\ 0 &{}\quad 0 \end{array} \right] , \quad S= \left[ \begin{array}{cc} 0 &{}\quad 0 \\ a &{}\quad b \end{array} \right] . \end{aligned}$$

Then, we obtain the next table of products of the matrices \(P, \> Q, \> R,\) and \(S\):

$$\begin{aligned} PP&=aP, \quad PQ=bR, \quad PR=aR, \quad PS=bP, \\ QP&=cS, \quad QQ=dQ, \quad QR=cQ, \quad QS=dS, \\ RP&=cP, \quad RQ=dR, \quad RR=cR, \quad RS=dP, \\ SP&=aS, \quad SQ=bQ, \quad SR=aQ, \quad SS=bS, \end{aligned}$$

where \(PQ=bR\), for example. Using this table, we have

$$\begin{aligned} \overbrace{PP \dots P}^{w_1} \overbrace{QQ \dots Q}^{w_2} \overbrace{PP \dots P}^{w_3}&\dots \overbrace{QQ \dots Q}^{w_{2 \gamma }} \overbrace{PP \dots P}^{w_{2 \gamma +1}}\\&= a^{w_1-1} b d^{w_2-1} c a^{w_3-1} b \dots d^{w_{2 \gamma }-1} c a^{w_{2 \gamma +1}-1} P, \\ \overbrace{QQ \dots Q}^{w_1} \overbrace{PP \dots P}^{w_2} \overbrace{QQ \dots Q}^{w_3}&\dots \overbrace{PP \dots P}^{w_{2 \gamma }} \overbrace{QQ \dots Q}^{w_{2 \gamma +1}} \\&= d^{w_1-1} c a^{w_2-1} b d^{w_3-1} c \dots a^{w_{2 \gamma }-1} b d^{w_{2 \gamma +1}-1} Q, \end{aligned}$$

where \(w_1, w_2, \ldots , w_{2 \gamma +1} \ge 1\) and \(\gamma \ge 1\).
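These collapse rules are easy to confirm numerically. For instance, with \(w_1=2, \> w_2=2, \> w_3=1\), the first identity reads \(P^2Q^2P = abdc\,P\). The sketch below checks this, together with the products \(PQ=bR\) and \(QP=cS\), for the example coin \(U=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & i \\ j & k \end{bmatrix}\), using our complex embedding of quaternions; left multiplication by a quaternion scalar becomes multiplication by a block-diagonal matrix.

```python
import numpy as np

def quat(x0, x1, x2, x3):
    return np.array([[ x0 + 1j*x1, x2 + 1j*x3],
                     [-x2 + 1j*x3, x0 - 1j*x1]])

one, qi, qj, qk = quat(1,0,0,0), quat(0,1,0,0), quat(0,0,1,0), quat(0,0,0,1)
zero = np.zeros((2, 2))
s = 1/np.sqrt(2)
a, b, c, d = s*one, s*qi, s*qj, s*qk        # entries of the example coin

P = np.block([[a, b], [zero, zero]])
Q = np.block([[zero, zero], [c, d]])
R = np.block([[c, d], [zero, zero]])
S = np.block([[zero, zero], [a, b]])

def scal(q):
    # Left multiplication of a 2x2 quaternionic matrix by the quaternion q.
    return np.kron(np.eye(2), q)

# Products from the table, e.g. PQ = bR and QP = cS.
assert np.allclose(P @ Q, scal(b) @ R)
assert np.allclose(Q @ P, scal(c) @ S)

# The collapse identities with w1 = 2, w2 = 2, w3 = 1 (gamma = 1):
# P^2 Q^2 P = a b d c P  and  Q^2 P^2 Q = d c a b Q.
assert np.allclose(P @ P @ Q @ Q @ P, scal(a @ b @ d @ c) @ P)
assert np.allclose(Q @ Q @ P @ P @ Q, scal(d @ c @ a @ b) @ Q)
```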

We should remark that \(P, \> Q, \> R,\) and \(S\) form an orthogonal basis of the vector space of \(2 \times 2\) quaternionic matrices with respect to the trace inner product \(\langle A | B \rangle = \mathrm{tr}(A^{*}B)\). So, \(\Xi _n (l,m)\) has the following form:

$$\begin{aligned} \Xi _n (l,m) = p_n (l,m) P + q_n (l,m) Q + r_n (l,m) R + s_n (l,m) S. \end{aligned}$$

The next problem is to obtain explicit forms of \(p_n (l,m), q_n (l,m), r_n (l,m)\), and \(s_n (l,m)\). In the case of \(n=l+m=4\) with \(l=3,\> m=1\), we have \(\Xi _4 (3,1) = (abc+bca) P + a^2b R + c a^2 S\). So, in this case,

$$\begin{aligned} p_4 (3,1)=abc+bca, \quad q_4 (3,1)=0, \quad r_4 (3,1)=a^2b, \quad s_4 (3,1)=ca^2. \end{aligned}$$

If \(a,b,c,d \in \mathbb {C}\) (the QW case), then we have

$$\begin{aligned} p_4 (3,1)=2abc, \quad q_4 (3,1)=0, \quad r_4 (3,1)=a^2b, \quad s_4 (3,1)=a^2c. \end{aligned}$$

However, it would be hard to obtain an explicit form of \(\Xi _n (l,m)\) like that of the QW case (see Lemma 1 in [12] or Lemma 2 in [13], for example).

3 Results

In this section, we present our results on QQWs. Let

$$\begin{aligned} \Psi (X) = \left\{ \varphi = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \in X^2 : |\alpha |^2 + |\beta |^2 =1 \right\} , \end{aligned}$$

for \(X = \mathbb {R}, \mathbb {C}, \mathbb {H}\). Here, we introduce the following set of measures. For any fixed \(U \in U (2,X)\) with \(X = \mathbb {R}, \mathbb {C}, \mathbb {H}\),

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,Y))&= \left\{ \phi \left( (U^{(s)})^{n} \Psi _{0} ^{\varphi } \right) : \varphi \in \Psi (Y) \right\} \quad (Y = \mathbb {R}, \mathbb {C}, \mathbb {H}),\\ \mathcal{M}^\mathrm{glob}_n (U;(\Psi _0,Y))&= \left\{ \phi \left( (U^{(s)})^{n} \Psi _{0} \right) : \Psi _{0} \in Y^{\mathbb {Z}} \setminus \{ \varvec{0} \} \right\} \quad (Y = \mathbb {R}, \mathbb {C}, \mathbb {H}), \end{aligned}$$

where \(n=0,1,2, \ldots \). By definition, for any \(U \in U (2,X)\) with \(X = \mathbb {R}, \mathbb {C}, \mathbb {H}\), we have

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_0 (U;(\varphi ,Y)) = \left\{ \delta _0 \right\} , \end{aligned}$$

where \(Y = \mathbb {R}, \mathbb {C}, \mathbb {H}\). Here, \(\delta _x\) is the delta measure at position \(x \in \mathbb {Z}\). It is trivial that for any fixed \(U \in U (2,X)\) with \(X = \mathbb {R}, \mathbb {C}, \mathbb {H}\),

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {R})) \subseteq \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {C})) \subseteq \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {H})) \quad (n \ge 0), \end{aligned}$$

and

$$\begin{aligned} \mathcal{M}^\mathrm{glob}_n (U;(\Psi _{0},\mathbb {R})) \subseteq \mathcal{M}^\mathrm{glob}_n (U;(\Psi _{0},\mathbb {C})) \subseteq \mathcal{M}^\mathrm{glob}_n (U;(\Psi _{0},\mathbb {H})) \quad (n \ge 0). \end{aligned}$$

In this setting, we obtain

Theorem 3.1

For any \(U \in U (2,\mathbb {R})\),

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {C})) = \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {H})) \quad (n \ge 0). \end{aligned}$$

The typical example is the Hadamard walk given by \(U=H \in U (2,\mathbb {R})\). The proof will appear in Sect. 4.

Next, we consider a relation between \(\mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {R}))\) and \(\mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {C}))\) for the Hadamard walk, that is, \(U=H\). In this case, we have

$$\begin{aligned} P \left( X_1 = -1 \right) = \frac{1}{2} | \alpha + \beta |^2, \quad P \left( X_1 = 1 \right) = \frac{1}{2} | \alpha - \beta |^2. \end{aligned}$$

Then, we see that \(P (X_1 = -1) = P (X_1 = 1)\) if and only if \(\mathfrak {R}(\alpha \overline{\beta }) = 0\), where \(\mathfrak {R}(x)\) is the real part of \(x \in \mathbb {H}\). In a similar fashion, we obtain

$$\begin{aligned} P \left( X_2 = -2 \right) = \frac{1}{4} | \alpha + \beta |^2, \quad P \left( X_2 = 2 \right) = \frac{1}{4} | \alpha - \beta |^2. \end{aligned}$$

Then, we see that \(P (X_2 = -2) = P (X_2 = 2)\) if and only if \(\mathfrak {R}(\alpha \overline{\beta }) = 0\). Here, we introduce the set of symmetric measures:

$$\begin{aligned} \mathcal{M}_\mathrm{sym} = \left\{ \mu \in \mathbb {R}_{+}^{\mathbb {Z}} \setminus \{ \varvec{0} \} : \mu (x) = \mu (-x) \>\> (x \in \mathbb {Z}) \right\} . \end{aligned}$$

Therefore, we have the following result:

Proposition 3.2

For any \(Y = \mathbb {R}, \mathbb {C}, \mathbb {H}\), we obtain

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_1 (H;(\varphi ,Y)) \cap \mathcal{M}_\mathrm{sym}&= \left\{ \frac{1}{2} \delta _{-1} + \frac{1}{2} \delta _{1} \right\} ,\\ \mathcal{M}^\mathrm{loc}_2 (H;(\varphi ,Y)) \cap \mathcal{M}_\mathrm{sym}&= \left\{ \frac{1}{4} \delta _{-2} + \frac{2}{4} \delta _{0} + \frac{1}{4} \delta _{2} \right\} . \end{aligned}$$

Furthermore, we have

$$\begin{aligned} P \left( X_3 = -3 \right)&= \frac{1}{8} |\alpha + \beta |^2, \quad P \left( X_3 = 3 \right) = \frac{1}{8} |\alpha - \beta |^2, \end{aligned}$$
(3.1)
$$\begin{aligned} P \left( X_3 = -1 \right)&= \frac{1}{8} \left\{ 4 |\alpha |^2 + |\alpha + \beta |^2 \right\} , \quad P \left( X_3 = 1 \right) = \frac{1}{8} \left\{ 4|\beta |^2 + |\alpha - \beta |^2 \right\} . \end{aligned}$$
(3.2)

Then, we see that “\(P (X_3 = -3) = P (X_3 = 3)\) and \(P(X_3 = -1) = P(X_3 = 1)\)” if and only if “\(\mathfrak {R}(\alpha \overline{\beta }) = 0\) and \(|\alpha |=|\beta |=1/\sqrt{2}\)”. Therefore, we have

Proposition 3.3

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_3 (H;(\varphi ,\mathbb {R})) \cap \mathcal{M}_\mathrm{sym}&= \emptyset ,\\ \mathcal{M}^\mathrm{loc}_3 (H;(\varphi ,Y)) \cap \mathcal{M}_\mathrm{sym}&= \left\{ \frac{1}{8} \delta _{-3} + \frac{3}{8} \delta _{-1} + \frac{3}{8} \delta _{1} + \frac{1}{8} \delta _{3} \right\} , \end{aligned}$$

for \(Y = \mathbb {C}, \mathbb {H}\).

Thus, we see that

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_3 (H;(\varphi ,\mathbb {R})) \subset \mathcal{M}^\mathrm{loc}_3 (H;(\varphi ,\mathbb {C})). \end{aligned}$$

So, in contrast with Theorem 3.1, the following does not hold: for any \(U \in U (2,\mathbb {R})\),

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {R})) = \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {C})) \quad (n \ge 0). \end{aligned}$$

In a similar way to our previous paper [14], we obtain the following results: Theorems 3.4, 3.5, and 3.6. Remark that for any \(U \in U (2,\mathbb {H})\), the unitarity of \(U\) implies that it suffices to consider three cases: \(abcd \not =0, \> a=0,\) and \(b=0\). For any \(c>0\), \(\mu _{u}^{(c)}\) denotes the uniform measure with parameter \(c\), i.e.,

$$\begin{aligned} \mu _{u}^{(c)} (x) = c \qquad (x \in \mathbb {Z}). \end{aligned}$$

Let \(\mathcal{M}_\mathrm{unif} = \{ \mu _{u}^{(c)} : c>0 \}\) be the set of uniform measures on \(\mathbb {Z}\).

Theorem 3.4

For any \(U \in U (2, \mathbb {H})\), we have

$$\begin{aligned} \mathcal{M}_\mathrm{unif} \subseteq \mathcal{M}_s. \end{aligned}$$
(3.3)

Let \(\mathcal{M}_\mathrm{exp}\) be the set of the measures \(\mu \) having exponential decay with respect to the position, i.e., \(\mu \) satisfies that there exist positive constants \(C_+, C_0, C_-\), and \(\gamma \in (0,1)\) such that

$$\begin{aligned} \mu (x) = \left\{ \begin{array}{cc} C_+ \gamma ^{|x|} &{} (x \ge 1), \\ C_0 &{} (x =0), \\ C_- \gamma ^{|x|} &{} (x \le -1). \end{array} \right. \end{aligned}$$

Furthermore, we obtain the following result for the case \(a=0\).

Theorem 3.5

For any \(U \in U (2, \mathbb {H})\) with \(a=0\), we see

$$\begin{aligned} \mathcal{M}_s \setminus \left( \mathcal{M}_\mathrm{unif} \cup \mathcal{M}_\mathrm{exp} \right) \not = \emptyset . \end{aligned}$$

The proof will be given in Sect. 6. For the case \(b=0\), we show

Theorem 3.6

For any \(U \in U (2, \mathbb {H})\) with \(b=0\), we see

$$\begin{aligned} \mathcal{M}_s = \mathcal{M}_\mathrm{unif}. \end{aligned}$$

Concerning the proof, see Sect. 7. For the remaining case (\(abcd \not = 0\)), we do not have any corresponding results on QQWs at the present stage.

4 Proof of Theorem 3.1

By definition, it is obvious that

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {C})) \subset \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {H})) \quad (n \ge 0). \end{aligned}$$

Thus, it is enough to show

$$\begin{aligned} \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {H})) \subset \mathcal{M}^\mathrm{loc}_n (U;(\varphi ,\mathbb {C})) \quad (n \ge 0). \end{aligned}$$

That is, for any \(\varphi = {}^\mathrm{T} [\alpha , \beta ] \in \mathbb {H}^2\) with \(|\alpha |^2+|\beta |^2=1\), there exists \(\widetilde{\varphi } = {}^\mathrm{T} [\widetilde{\alpha }, \widetilde{\beta }] \in \mathbb {C}^2\) with \(|\widetilde{\alpha }|^2+|\widetilde{\beta }|^2=1\) such that

$$\begin{aligned} P \left( X_n ^{\varphi } = x \right) = P \left( X_n ^{\widetilde{\varphi }} = x \right) \end{aligned}$$
(4.1)

for any \(n=0,1,2, \ldots \) and \(x \in \mathbb {Z}.\)

First, we see that \(\alpha , \beta \in \mathbb {H}\) with \(|\alpha |^2+|\beta |^2=1\) can be written as

$$\begin{aligned} \alpha&= \left\{ \cos \theta _{\alpha } + \left( \alpha _x i + \alpha _y j + \alpha _z k \right) \sin \theta _{\alpha } \right\} \cos \xi , \end{aligned}$$
(4.2)
$$\begin{aligned} \beta&= \left\{ \cos \theta _{\beta } + \left( \beta _x i + \beta _y j + \beta _z k \right) \sin \theta _{\beta } \right\} \sin \xi , \end{aligned}$$
(4.3)

where \(\theta _{\alpha }, \> \theta _{\beta } \in [0, 2\pi ), \> \xi \in [0, \pi /2], \> \alpha _x, \alpha _y, \alpha _z, \beta _x, \beta _y, \beta _z \in \mathbb {R}\) with

$$\begin{aligned} \alpha _x ^2 + \alpha _y ^2 + \alpha _z ^2 = \beta _x ^2 + \beta _y ^2 + \beta _z ^2 = 1. \end{aligned}$$

From \(U \in U (2,\mathbb {R})\) and Lemma 1 in [12] (or Lemma 2 in [13]), we have

$$\begin{aligned} \Xi _n (l,m) = \begin{bmatrix} r_{11}&\quad r_{12} \\ r_{21}&\quad r_{22} \end{bmatrix}, \end{aligned}$$

where \(r_{st} \in \mathbb {R}\> (s,t \in \{1,2\})\). Then, we obtain

$$\begin{aligned} P \left( X_n ^{\varphi } = x \right) = || \Xi _n (l,m) \varphi ||^2 = A |\alpha |^2 + B |\beta |^2 + C \mathfrak {R}\left( \alpha \overline{\beta } \right) , \end{aligned}$$
(4.4)

where \(n=l+m, \> x=-l+m\) and

$$\begin{aligned} A = r_{11}^2 + r_{21}^2, \quad B = r_{12}^2 + r_{22}^2, \quad C = 2 \left( r_{11} r_{12} + r_{21} r_{22} \right) . \end{aligned}$$

Combining Eqs. (4.2) and (4.3) with Eq. (4.4) implies

$$\begin{aligned} P \left( X_n ^{\varphi } = x \right) = A \cos ^2 \xi + B \sin ^2 \xi + C \left( \cos \theta _{\alpha } \cos \theta _{\beta } + \gamma \sin \theta _{\alpha } \sin \theta _{\beta } \right) \cos \xi \sin \xi , \end{aligned}$$
(4.5)

where \(\gamma = \alpha _x \beta _x + \alpha _y \beta _y + \alpha _z \beta _z.\) If we take \(\widetilde{\alpha }, \widetilde{\beta } \in \mathbb {C}\) with

$$\begin{aligned} \widetilde{\alpha } = \left( \cos \widetilde{\theta }_{\alpha } + i \sin \widetilde{\theta }_{\alpha } \right) \cos \widetilde{\xi }, \quad \widetilde{\beta } = \left( \cos \widetilde{\theta }_{\beta } + i \sin \widetilde{\theta }_{\beta } \right) \sin \widetilde{\xi } \end{aligned}$$

where \(\widetilde{\theta }_{\alpha }, \> \widetilde{\theta }_{\beta } \in [0, 2\pi ), \> \widetilde{\xi } \in [0, \pi /2]\), then we have

$$\begin{aligned} P \left( X_n ^{\widetilde{\varphi }} = x \right) = A \cos ^2 \widetilde{\xi } + B \sin ^2 \widetilde{\xi } + C \cos \left( \widetilde{\theta }_{\alpha } - \widetilde{\theta }_{\beta } \right) \cos \widetilde{\xi } \sin \widetilde{\xi }. \end{aligned}$$
(4.6)

We should remark that \(|\gamma | \le 1\). So, we see

$$\begin{aligned} \left| \cos \theta _{\alpha } \cos \theta _{\beta } + \gamma \sin \theta _{\alpha } \sin \theta _{\beta } \right| \le 1. \end{aligned}$$

Therefore, if we choose \(\widetilde{\xi } = \xi \) and \(\widetilde{\theta }_{\alpha }, \> \widetilde{\theta }_{\beta }\) satisfying

$$\begin{aligned} \cos \left( \widetilde{\theta }_{\alpha } - \widetilde{\theta }_{\beta } \right) = \cos \theta _{\alpha } \cos \theta _{\beta } + \gamma \sin \theta _{\alpha } \sin \theta _{\beta }, \end{aligned}$$

then Eqs. (4.5) and (4.6) give

$$\begin{aligned} P \left( X_n ^{\varphi } = x \right) = P \left( X_n ^{\widetilde{\varphi }} = x \right) \end{aligned}$$

for any \(n=0,1,2, \ldots \) and \(x \in \mathbb {Z}\). Thus, the proof is completed.
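The construction in this proof can be tested numerically: take a quaternionic initial state of the form (4.2)–(4.3), build \(\widetilde{\varphi }\) with \(\widetilde{\xi } = \xi \), \(\widetilde{\theta }_{\beta } = 0\), and \(\widetilde{\theta }_{\alpha } = \arccos ( \cos \theta _{\alpha } \cos \theta _{\beta } + \gamma \sin \theta _{\alpha } \sin \theta _{\beta } )\), and compare the two Hadamard-walk distributions. A sketch (the parameter values are arbitrary choices of ours; quaternions are represented as \(2 \times 2\) complex matrices):

```python
import numpy as np

def quat(x0, x1, x2, x3):
    return np.array([[ x0 + 1j*x1, x2 + 1j*x3],
                     [-x2 + 1j*x3, x0 - 1j*x1]])

def walk(P, Q, phi, n, dim):
    # Run n steps from phi at the origin and return the measure phi(Psi_n).
    state = {0: phi}
    for _ in range(n):
        new = {}
        for x, amp in state.items():
            new[x - 1] = new.get(x - 1, np.zeros_like(amp)) + P @ amp
            new[x + 1] = new.get(x + 1, np.zeros_like(amp)) + Q @ amp
        state = new
    return {x: np.linalg.norm(amp)**2 / dim for x, amp in state.items()}

s = 1/np.sqrt(2)

# Quaternionic initial state, Eqs. (4.2)-(4.3), with arbitrary parameters.
th_a, th_b, xi = 0.7, 1.9, 0.6
ax, bx = np.array([1, 2, 2]) / 3, np.array([0.0, 1.0, 0.0])   # unit axes
alpha = (np.cos(th_a)*quat(1, 0, 0, 0) + np.sin(th_a)*quat(0, *ax)) * np.cos(xi)
beta  = (np.cos(th_b)*quat(1, 0, 0, 0) + np.sin(th_b)*quat(0, *bx)) * np.sin(xi)

# Hadamard coin: embedded quaternionic version and plain complex version.
I2, Z2 = np.eye(2), np.zeros((2, 2))
Pq, Qq = s*np.block([[I2, I2], [Z2, Z2]]), s*np.block([[Z2, Z2], [I2, -I2]])
Pc, Qc = s*np.array([[1, 1], [0, 0]]), s*np.array([[0, 0], [1, -1]])

# Matched complex state: same xi, theta_b~ = 0, cos(theta_a~) as in the proof.
gamma = ax @ bx
th = np.arccos(np.cos(th_a)*np.cos(th_b) + gamma*np.sin(th_a)*np.sin(th_b))
phi_q = np.vstack([alpha, beta])
phi_c = np.array([np.exp(1j*th)*np.cos(xi), np.sin(xi)])

mu_q = walk(Pq, Qq, phi_q, 6, 2)   # H-valued amplitudes: 4x2 blocks
mu_c = walk(Pc, Qc, phi_c, 6, 1)   # C-valued amplitudes
assert all(abs(mu_q[x] - mu_c[x]) < 1e-9 for x in mu_q)
```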

5 Proof of Theorem 3.4

This section gives a proof of Theorem 3.4, i.e., \(\mathcal{M}_\mathrm{unif} \subseteq \mathcal{M}_s (U)\) for any \(U \in U (2, \mathbb {H}).\) This proof is the same as that in [14]. So, we omit the details. First, we consider the following initial state: for any \(x \in \mathbb {Z}\),

$$\begin{aligned} \Psi _{0} (x) = \varphi = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}\in \mathbb {H}^2, \end{aligned}$$

where \(||\varphi ||^2 = |\alpha |^2+|\beta |^2>0\). Remark that \(\Psi _{0} (x)\) does not depend on the position \(x\). Then, we have

$$\begin{aligned} \Psi _{1} (x) = P \Psi _{0} (x+1) + Q \Psi _{0}(x-1) = (P+Q) \varphi = U \varphi . \end{aligned}$$

In a similar fashion, we get \(\Psi _{n} (x) = U^n \varphi \) for any \(n =0,1,2, \ldots \) and \(x \in \mathbb {Z}\). Thus, we have

$$\begin{aligned} \mu _n (x) = || \Psi _{n} (x) ||^2 =|| U^n \varphi ||^2 = ||\varphi ||^2 (= |\alpha |^2 + |\beta |^2), \end{aligned}$$

since \(U\) is unitary. That is, this measure \(\mu _0\) satisfies \(\mu _0 = \mu _{u}^{(c)}\) with \(c=||\varphi ||^2\) and \(\mu _n (x)= \mu _0 (x) \> (n \ge 1, \> x \in \mathbb {Z}).\) Therefore, the proof is completed.
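This argument is easy to verify numerically. The sketch below (our own device: it represents quaternions as \(2 \times 2\) complex matrices and wraps \(\mathbb {Z}\) into a cycle of length \(N\) so that the constant initial state has a finite description) starts every site in the same \(\varphi \in \mathbb {H}^2\) and confirms that the measure stays uniform with value \(c = \Vert \varphi \Vert ^2\).

```python
import numpy as np

def quat(x0, x1, x2, x3):
    return np.array([[ x0 + 1j*x1, x2 + 1j*x3],
                     [-x2 + 1j*x3, x0 - 1j*x1]])

one, qi, qj, qk = quat(1,0,0,0), quat(0,1,0,0), quat(0,0,1,0), quat(0,0,0,1)
zero = np.zeros((2, 2))
s = 1/np.sqrt(2)
P = s*np.block([[one, qi], [zero, zero]])
Q = s*np.block([[zero, zero], [qj, qk]])

N = 6                                               # cycle length (stand-in for Z)
phi = np.vstack([quat(1, 2, 0, 0), quat(0, 0, 1, -1)])   # alpha = 1+2i, beta = j-k
c = np.linalg.norm(phi)**2 / 2                      # ||phi||^2 = |alpha|^2 + |beta|^2

state = [phi.copy() for _ in range(N)]              # the same phi at every site
for _ in range(5):
    state = [P @ state[(x + 1) % N] + Q @ state[(x - 1) % N] for x in range(N)]
    mu = [np.linalg.norm(amp)**2 / 2 for amp in state]
    assert np.allclose(mu, c)                       # measure stays uniform
```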

It is noted that we can easily generalize Theorem 3.4 to an \(N\)-state QQW on \(\mathbb {Z}\) determined by an \(N \times N\) unitary matrix \(U \in U (N, \mathbb {H})\), as in the QW case (see [14]).

6 Case \(a=0\)

This section deals with the right eigenvalue problem and stationary measures of QQWs for the case \(a=0\). In this case, \(U\) can be expressed as

$$\begin{aligned} U= \begin{bmatrix} 0&\quad b \\ c&\quad 0 \end{bmatrix}, \end{aligned}$$

where \(b, c \in \mathbb {H}\) and \(|b|=|c|=1\).

First we consider

$$\begin{aligned} U= \begin{bmatrix} 0&\quad 1 \\ 1&\quad 0 \end{bmatrix}. \end{aligned}$$

From Eqs. (2.4) and (2.5), we see that for any \(x \in \mathbb {Z}\),

$$\begin{aligned} \Psi ^L(x) (\lambda ^2 - 1) = 0. \end{aligned}$$

From Proposition 1.1, we get two eigenvalues \(\lambda _{\pm } = \pm 1\). As an initial state, we consider \(\Psi ^{(\pm )}\) corresponding to \(\lambda _{\pm }\) as follows:

$$\begin{aligned} \Psi ^{(\pm )} = {}^\mathrm{T} \left[ \ldots , \begin{bmatrix} \Psi ^{(\pm ,L)} (-2) \\ \Psi ^{(\pm ,R)} (-2) \end{bmatrix}, \begin{bmatrix} \Psi ^{(\pm ,L)} (-1) \\ \Psi ^{(\pm ,R)} (-1) \end{bmatrix}, \begin{bmatrix} \Psi ^{(\pm ,L)} (0) \\ \Psi ^{(\pm ,R)} (0) \end{bmatrix}, \begin{bmatrix} \Psi ^{(\pm ,L)} (1) \\ \Psi ^{(\pm ,R)} (1) \end{bmatrix}, \begin{bmatrix} \Psi ^{(\pm ,L)} (2) \\ \Psi ^{(\pm ,R)} (2) \end{bmatrix}, \ldots \right] . \end{aligned}$$
(6.1)

Here, for any \(x \in \mathbb {Z}\),

$$\begin{aligned}&\Psi ^{(\pm ,L)} (2x) = \alpha _{2x}, \>\> \quad \Psi ^{(\pm ,R)} (2x) = \beta _{2x}, \nonumber \\&\Psi ^{(\pm ,L)} (2x-1) = \beta _{2x} \lambda _{\pm }, \>\>\quad \Psi ^{(\pm ,R)} (2x+1) = \alpha _{2x} \lambda _{\pm } , \end{aligned}$$
(6.2)

where \(\alpha _{2x}, \> \beta _{2x} \in \mathbb {H}\) with \(\alpha _{2x} \beta _{2x} \not = 0\). In fact, we have \(U^{(s)} \Psi ^{(\pm )} = \Psi ^{(\pm )} \lambda _{\pm }.\) Therefore,

$$\begin{aligned} (U^{(s)})^n \Psi ^{(\pm )} = \Psi ^{(\pm )} \lambda _{\pm }^n. \end{aligned}$$
(6.3)

Let \(\mu _n ^{(\Psi ^{(\pm )})} = \phi ((U^{(s)})^n \Psi ^{(\pm )})\) and

$$\begin{aligned} \mu _n ^{(\Psi ^{(\pm )})} = {}^\mathrm{T} \left[ \ldots , \mu _n^{(\Psi ^{(\pm )})} (-2), \mu _n^{(\Psi ^{(\pm )})} (-1), \mu _n^{(\Psi ^{(\pm )})} (0), \mu _n^{(\Psi ^{(\pm )})} (1), \mu _n^{(\Psi ^{(\pm )})} (2), \ldots \right] . \end{aligned}$$

From Eqs. (6.1), (6.2), and (6.3), we obtain

$$\begin{aligned} \mu _n ^{(\Psi ^{(\pm )})} = {}^\mathrm{T} \left[ \ldots , |\alpha _{-2}|^2+|\beta _{-2}|^2, |\alpha _{-2}|^2+|\beta _0|^2, |\alpha _0|^2+|\beta _0|^2, |\alpha _0|^2+|\beta _2|^2, |\alpha _2|^2+|\beta _2|^2, \ldots \right] . \end{aligned}$$

Therefore, we see that for any \(n \ge 0\), \(\mu _n ^{(\Psi ^{(\pm )})} = \mu _0 ^{(\Psi ^{(\pm )})}\). So, \(\mu _0 ^{(\Psi ^{(\pm )})}\) is a stationary measure, that is, \(\mu _0 ^{(\Psi ^{(\pm )})} \in \mathcal{M}_s (U)\). Moreover, \(\mu _n ^{(\Psi ^{(\pm )})} (2x) = |\alpha _{2x}|^2+|\beta _{2x}|^2 \> (x \in \mathbb {Z})\). Hence, in general, the stationary measure \(\mu _0^{(\Psi ^{(\pm )})}\) is neither uniform nor exponentially decaying. Therefore, we obtain

$$\begin{aligned} \mathcal{M}_s (U) \setminus \left( \mathcal{M}_\mathrm{unif} \cup \mathcal{M}_\mathrm{exp} \right) \not = \emptyset . \end{aligned}$$
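The eigenvector relation (6.3) and the stationarity of the induced measure can be checked numerically on a finite cycle. The sketch below assumes the standard one-step dynamics \(\Psi _{n+1}^L(x) = \Psi _n^R(x+1)\), \(\Psi _{n+1}^R(x) = \Psi _n^L(x-1)\) for this coin; the cycle length and quaternion amplitudes are arbitrary choices of ours:

```python
def qnorm2(p):
    """Squared norm |p|^2 = x0^2 + x1^2 + x2^2 + x3^2 of a quaternion tuple."""
    return sum(c * c for c in p)

N = 6  # even cycle length standing in for Z
# arbitrary quaternions alpha_{2x}, beta_{2x} at the even sites, as in Eq. (6.2)
alpha = {0: (1.0, 2.0, 0.0, -1.0), 2: (0.5, 0.0, 1.0, 0.0), 4: (-1.0, 1.0, 1.0, 1.0)}
beta = {0: (0.0, 1.0, -1.0, 0.0), 2: (2.0, 0.0, 0.0, 1.0), 4: (1.0, -1.0, 0.0, 2.0)}

for lam in (1.0, -1.0):  # lambda_{+-} = +-1
    L, R = [None] * N, [None] * N
    for e in alpha:
        L[e], R[e] = alpha[e], beta[e]
        L[(e - 1) % N] = tuple(lam * c for c in beta[e])   # Psi^L(2x-1) = beta_{2x} lambda
        R[(e + 1) % N] = tuple(lam * c for c in alpha[e])  # Psi^R(2x+1) = alpha_{2x} lambda
    # one step: Psi^L_{n+1}(x) = Psi^R_n(x+1), Psi^R_{n+1}(x) = Psi^L_n(x-1)
    newL = [R[(x + 1) % N] for x in range(N)]
    newR = [L[(x - 1) % N] for x in range(N)]
    # right eigenvalue relation U^(s) Psi = Psi lambda (lambda is real here)
    assert all(newL[x] == tuple(lam * c for c in L[x]) for x in range(N))
    assert all(newR[x] == tuple(lam * c for c in R[x]) for x in range(N))
    # hence the measure mu_n(x) = |Psi^L(x)|^2 + |Psi^R(x)|^2 is stationary
    mu0 = [qnorm2(L[x]) + qnorm2(R[x]) for x in range(N)]
    mu1 = [qnorm2(newL[x]) + qnorm2(newR[x]) for x in range(N)]
    assert mu1 == mu0
```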

Next, we consider the following case:

$$\begin{aligned} U= \begin{bmatrix} 0&\quad 1 \\ -1&\quad 0 \end{bmatrix}. \end{aligned}$$

By Eqs. (2.4) and (2.5), we see that for any \(x \in \mathbb {Z}\),

$$\begin{aligned} \Psi ^L(x) (\lambda ^2 + 1) = 0. \end{aligned}$$

By Proposition 1.1, \(\lambda ^2 = -1\) holds if and only if \(\lambda \) is a pure imaginary unit quaternion, so we have infinitely many eigenvalues:

$$\begin{aligned} \lambda = x_1 i + x_2 j + x_3 k \quad \left( x_1^2+x_2^2+x_3^2=1\right) . \end{aligned}$$
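Every point of this unit 2-sphere of pure imaginary quaternions indeed squares to \(-1\); a minimal numerical check (the helper `qmul` implementing the Hamilton product and the sample points are ours):

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions represented as (x0, x1, x2, x3)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

# sample points (x1, x2, x3) on the unit sphere x1^2 + x2^2 + x3^2 = 1
s = 1.0 / math.sqrt(3.0)
samples = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
           (s, s, s), (0.6, 0.8, 0.0)]
for x1, x2, x3 in samples:
    lam = (0.0, x1, x2, x3)  # pure imaginary unit quaternion
    sq = qmul(lam, lam)      # should equal -1 for every such lambda
    assert all(abs(u - v) < 1e-12 for u, v in zip(sq, (-1.0, 0.0, 0.0, 0.0)))
```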

As an initial state, we consider \(\Psi ^{(\lambda )}\) corresponding to \(\lambda \) with

$$\begin{aligned}&\Psi ^{(\lambda ,L)} (2x) = \alpha _{2x}, \>\>\quad \Psi ^{(\lambda ,R)} (2x) = \beta _{2x}, \nonumber \\&\Psi ^{(\lambda ,L)} (2x-1) = -\beta _{2x} \lambda , \>\>\quad \Psi ^{(\lambda ,R)} (2x+1) = \alpha _{2x} \lambda , \end{aligned}$$

where \(\alpha _{2x}, \> \beta _{2x} \in \mathbb {H}\) with \(\alpha _{2x} \beta _{2x} \not = 0\). As in the previous case, we obtain the same conclusion:

$$\begin{aligned} \mathcal{M}_s (U) \setminus \left( \mathcal{M}_\mathrm{unif} \cup \mathcal{M}_\mathrm{exp} \right) \not = \emptyset . \end{aligned}$$

Similarly, we can extend this result to the general case \(b, c \in \mathbb {H}\) with \(|b|=|c|=1\) and \(a=d=0\).

7 Case \(b=0\)

This section is devoted to stationary measures of QQWs for \(b=0\). In this case, \(U\) can be written as

$$\begin{aligned} U= \begin{bmatrix} a&\quad 0 \\ 0&\quad d \end{bmatrix}, \end{aligned}$$

where \(a, d \in \mathbb {H}\) with \(|a|=|d|=1\). Here, we introduce the following set of measures:

$$\begin{aligned} \mathcal{M}_{n}&= \mathcal{M}_n (U)\\&= \left\{ \mu \in \mathbb {R}_{+}^{\mathbb {Z}} \setminus \{ \varvec{0} \} : \hbox {there exists} \, \Psi _{0} \, \hbox {such that} \ \phi ((U^{(s)})^{k}\Psi _{0})=\mu \; (k =0,1, \ldots , n) \right\} . \end{aligned}$$

By definition, we see that

$$\begin{aligned} \mathcal{M}_{1} \supseteq \mathcal{M}_{2} \supseteq \dots \supseteq \mathcal{M}_{n} \supseteq \mathcal{M}_{n+1} \supseteq \dots , \quad \mathcal{M}_s = \bigcap _{n=1}^{\infty } \mathcal{M}_n. \end{aligned}$$

As in the case of QWs, we have the following result, which is stronger than Theorem 3.6; it follows from an argument similar to that given in [14]:

Theorem 7.1

For any \(U \in U (2, \mathbb {H})\) with \(b=0\), we have \(\mathcal{M}_s (U)= \mathcal{M}_\mathrm{unif} = \mathcal{M}_2 (U).\)
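The inclusion \(\mathcal{M}_\mathrm{unif} \subseteq \mathcal{M}_s (U)\) in Theorem 7.1 can be illustrated numerically: for the diagonal coin the one-step dynamics reduce to \(\Psi _{n+1}^L(x) = a \Psi _n^L(x+1)\) and \(\Psi _{n+1}^R(x) = d \Psi _n^R(x-1)\), so a state with site-independent amplitudes keeps a uniform measure. The coin entries, cycle length, and amplitudes below are arbitrary choices of ours:

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (x0, x1, x2, x3)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def qnorm2(p):
    """Squared norm of a quaternion tuple."""
    return sum(c * c for c in p)

N = 8  # cycle length standing in for Z
a = (0.5, 0.5, 0.5, 0.5)  # unit quaternion, |a| = 1
d = (0.0, 0.6, 0.0, 0.8)  # unit quaternion, |d| = 1
assert abs(qnorm2(a) - 1.0) < 1e-12 and abs(qnorm2(d) - 1.0) < 1e-12

# site-independent amplitudes, so phi(Psi_0) is a uniform measure
L = [(0.3, -0.1, 0.7, 0.2)] * N
R = [(1.0, 0.5, 0.0, -0.5)] * N
mu0 = [qnorm2(L[x]) + qnorm2(R[x]) for x in range(N)]

for _ in range(5):  # five steps of the walk with coin diag(a, d)
    L = [qmul(a, L[(x + 1) % N]) for x in range(N)]
    R = [qmul(d, R[(x - 1) % N]) for x in range(N)]
mu = [qnorm2(L[x]) + qnorm2(R[x]) for x in range(N)]
assert all(abs(m - m0) < 1e-12 for m, m0 in zip(mu, mu0))  # still uniform
```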

8 Summary

In this paper, we introduced the QQW, which is determined by a unitary matrix whose components are quaternions, and explored the relation between QWs and QQWs. Here, we consider the following sets of measures. For a fixed \(\varphi \in \Psi (X)\) with \(X = \mathbb {R}, \mathbb {C}, \mathbb {H}\),

$$\begin{aligned} \mathcal{M}^\mathrm{loc} _{n} (\varphi ;(U,Y)) = \left\{ \phi \left( (U^{(s)})^{n} \Psi _{0} ^{\varphi } \right) : U \in U (2,Y) \right\} \quad (Y = \mathbb {R}, \mathbb {C}, \mathbb {H}). \end{aligned}$$

For a fixed \(\Psi _{0} \in X^{\mathbb {Z}} \setminus \{ \varvec{0} \}\) with \(X = \mathbb {R}, \mathbb {C}, \mathbb {H}\),

$$\begin{aligned} \mathcal{M}^\mathrm{glob} _{n} (\Psi _{0};(U,Y)) = \left\{ \phi \left( (U^{(s)})^{n} \Psi _{0} \right) : U \in U (2,Y) \right\} \quad (Y = \mathbb {R}, \mathbb {C}, \mathbb {H}). \end{aligned}$$

One interesting future problem is to clarify the relations among the above sets.