Introduction

Wavelet bases can be constructed using the notion of multiresolution analysis (MRA). To generate an MRA, we need a function vector \(\varPhi =(\phi _i)_{i=1}^n\), \(\phi _i:\mathbb {R}\rightarrow \mathbb {C}\), that is \(L^2\) stable, compactly supported and satisfies the multiscaling equation

$$\begin{aligned} {\varPhi (x)=\sqrt{2}\sum _{k=0}^l H_k\varPhi (2x-k)}, \end{aligned}$$
(1.1)

where \(l\in \mathbb {N}\) and \(H_k \in \mathbb {C}^{n\times n}\). A function vector \(\varPhi\) which satisfies the multiscaling equation (1.1) is called a multiscaling function or refinable function. To find such a solution vector, we usually switch to the frequency domain, where Eq. (1.1) becomes

$$\begin{aligned} \hat{\varPhi }(\xi )=H(\xi /2)\hat{\varPhi }(\xi /2), \end{aligned}$$
(1.2)

where

$$\begin{aligned} H(\xi )=\frac{1}{\sqrt{2}} \sum _{k=0}^{l}H_k e^{-ik\xi }, \end{aligned}$$
(1.3)

which is called a symbol function or mask function. Both the existence of a solution to the multiscaling equation and the properties of that solution are determined by the nature of the corresponding symbol function. In fact, the symbol function \(H(\xi )\) is a matrix polynomial in the complex exponential \(e^{-i\xi }\). Each matrix polynomial \(H(\xi )\) possesses a spectral pair or Jordan pair (X, T), where X is a matrix containing the generalized eigenvectors of \(H(\xi )\) and T is a block diagonal matrix in which each block is a Jordan matrix corresponding to an eigenvalue of \(H(\xi )\). Conversely, given a pair (X, T), we can construct a matrix polynomial having (X, T) as its spectral data. Our task is to determine the properties of the spectral data under which the corresponding matrix polynomial is the symbol function of a compactly supported, symmetric multiscaling function \(\varPhi (x)\), and then to find a dual multiscaling function \(\tilde{\varPhi }(x)\) so that \(\varPhi (x)\) and \(\tilde{\varPhi }(x)\) form a pair of pseudo-biorthogonal multiscaling functions.
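Numerically, the symbol (1.3) is straightforward to evaluate. A minimal sketch in Python/NumPy, with hypothetical coefficient matrices chosen purely for illustration:

```python
import numpy as np

def symbol(coeffs, xi):
    """Evaluate H(xi) = (1/sqrt(2)) * sum_k H_k e^{-i k xi}  (Eq. (1.3))."""
    return sum(Hk * np.exp(-1j * k * xi) for k, Hk in enumerate(coeffs)) / np.sqrt(2)

# Hypothetical 2x2 coefficients, for illustration only
H_coeffs = [np.eye(2) / np.sqrt(2), np.eye(2) / np.sqrt(2)]
H0 = symbol(H_coeffs, 0.0)   # here H(0) = (H_0 + H_1)/sqrt(2) = I
```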

Preliminaries

Let

$$\begin{aligned} L(\lambda )=\sum _{k=0}^{l}A_k \lambda ^k, A_k\in \mathbb {C}^{n\times n}, \lambda \in \mathbb {C} \end{aligned}$$
(2.1)

be a matrix polynomial of degree l. A complex number \(\lambda _0\) is said to be an eigenvalue of \(L(\lambda )\) if \(\det L(\lambda _0)=0\). Then there exists a nonzero vector \(x_0\in \mathbb {C}^n\) such that \(L(\lambda _0)x_0=0\), and \(x_0\) is called an eigenvector of \(L(\lambda )\) corresponding to the eigenvalue \(\lambda _0\).
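Numerically, the eigenvalues of a matrix polynomial can be obtained from the standard block companion linearization. A sketch (assuming an invertible leading coefficient; `polyeig` is our own helper name, not a library routine):

```python
import numpy as np

def polyeig(coeffs):
    """Eigenvalues of L(lambda) = sum_k A_k lambda^k via companion
    linearization (assumes the leading coefficient is invertible)."""
    *A, Al = coeffs
    n, l = Al.shape[0], len(coeffs) - 1
    C = np.zeros((n * l, n * l), dtype=complex)
    C[: n * (l - 1), n:] = np.eye(n * (l - 1))   # super-diagonal identity blocks
    Alinv = np.linalg.inv(Al)
    for k, Ak in enumerate(A):                   # bottom block row: -Al^{-1} A_k
        C[-n:, k * n:(k + 1) * n] = -Alinv @ Ak
    return np.linalg.eigvals(C)

# Sanity check: L(lambda) = diag((lambda-2)(lambda-3), (lambda-1)(lambda-5))
A0 = np.diag([6.0, 5.0]); A1 = np.diag([-5.0, -6.0]); A2 = np.eye(2)
lam = np.sort_complex(polyeig([A0, A1, A2]))
```

The eigenvalues of the linearization are exactly the roots of \(\det L(\lambda)\), here \(\{1,2,3,5\}\).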

Definition 2.1

[1] The chain of vectors \(x_0, x_1,\ldots ,x_{k}\in \mathbb {C}^n\), \(x_0\ne 0\), is a Jordan chain of length \(k+1\) of the matrix polynomial \(L(\lambda )\) corresponding to the eigenvalue \(\lambda _0\) if

$$\begin{aligned} \sum _{p=0}^{i}\frac{L^p(\lambda _0)}{p!}x_{i-p}=0,\,\,\,i=0,1,2\ldots k, \end{aligned}$$
(2.2)

where \(L^p(\lambda _0)\) is the \(p^{\text{th}}\) derivative of \(L(\lambda )\) at \(\lambda _0\).

This generalizes the usual notion of a Jordan chain of a square matrix. Let \(T\in \mathbb {C}^{nl\times nl}\) be a block diagonal matrix in which each block is a Jordan matrix corresponding to an eigenvalue of \(L(\lambda )\), and let \(X\in \mathbb {C}^{n\times nl}\) be a matrix whose column vectors are precisely the Jordan chains of \(L(\lambda )\), appearing in the order in which the corresponding eigenvalues appear in T. Then the pair (X, T) is said to be a Jordan pair. Next we give the definition of a decomposable pair.

Definition 2.2

[1] A pair of matrices

$$\begin{aligned} X=[X_1 \,\,X_2]\;{\rm and}\;T= \left( \begin{array}{cl} T_1 &{} 0\\ 0 &{} T_2 \end{array}\right) , \end{aligned}$$

where \(X_1\in \mathbb {C}^{n\times m}\), \(X_2\in \mathbb {C}^{n\times (nl-m)}\) and \(T_1\in \mathbb {C}^{m\times m}\), \(T_2\in \mathbb {C}^{(nl-m)\times (nl-m)}\) with \(0\le m\le nl\) is called a decomposable pair of degree l if the matrix

$$\begin{aligned} S_{l-1}=Col[X_1T_1^i \,\,X_2T_2^{l-1-i}]_{i=0}^{l-1} \end{aligned}$$

is nonsingular. A decomposable pair (X, T) of degree l is called a decomposable pair of the regular \(n\times n\) matrix polynomial \(L(\lambda )=\sum _{i=0}^{l}A_i \lambda ^i\) if

$$\begin{aligned} \sum _{i=0}^{l}A_i X_1 T_1^i=0 ,\,\,\, \sum _{i=0}^{l}A_i X_2 T_2^{l-i}=0. \end{aligned}$$
(2.3)

Given a decomposable pair (X, T), we can construct a matrix polynomial \(L(\lambda )\) having (X, T) as its decomposable pair using the inverse representation theorem for matrix polynomials, stated as follows.

Theorem 2.1

[1] Let \((X,T)=([X_1\,\,X_2],T_1 \oplus T_2)\) be a decomposable pair of degree l, and let \(S_{l-2}=Col[X_1T_1^i \,\,X_2T_2^{l-2-i}]_{i=0}^{l-2}\). Then, for every \(n \times nl\) matrix V such that the matrix \(( \begin{array}{c} S_{l-2}\\ V \end{array})\) is nonsingular, the matrix polynomial

$$\begin{aligned} L(\lambda )=V(I-P)((I\oplus T_2)\lambda -(T_1\oplus I)) (U_0+U_1\lambda +U_2\lambda ^2+\cdots +U_{l-1}\lambda ^{l-1}), \end{aligned}$$
(2.4)

where

$$\begin{aligned} P=(I\oplus T_2) [Col(X_1T_1^i \,\,X_2T_2^{l-1-i})_{i=0}^{l-1}]^{-1} \left( \begin{array}{rcl} I\\ 0 \end{array}\right) S_{l-2} \end{aligned}$$
(2.5)

and

$$\begin{aligned}{}[U_0\,\,U_1\,\,U_2\ldots U_{l-1}]=[Col(X_1T_1^i \,\,X_2T_2^{l-1-i})_{i=0}^{l-1}]^{-1} \end{aligned}$$
(2.6)

has (X,T) as its decomposable pair.
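The construction of Theorem 2.1 can be checked numerically. A sketch for the special case where all eigenvalues are finite (\(X_2\) empty, so \(I\oplus T_2\) and \(T_1\oplus I\) reduce to I and T), with l = 2, n = 2 and a randomly chosen V; the final residual verifies Eq. (2.3):

```python
import numpy as np

rng = np.random.default_rng(0)
n, l = 2, 2

# A Jordan pair with nl = 4 distinct finite eigenvalues (X_2 empty case)
X = rng.standard_normal((n, n * l))
T = np.diag([2.0, 3.0, 0.5, -4.0])

S1 = np.vstack([X, X @ T])            # S_{l-1} = Col[X T^i], assumed nonsingular
S0 = X                                # S_{l-2} reduces to X when l = 2
V = rng.standard_normal((n, n * l))   # any V making [S0; V] nonsingular (generic)

E = np.vstack([np.eye(n * (l - 1)), np.zeros((n, n * (l - 1)))])
P = np.linalg.solve(S1, E @ S0)       # Eq. (2.5), with the I-plus-T_2 factor = I
U = np.linalg.inv(S1)                 # [U_0 U_1] = S_{l-1}^{-1}, Eq. (2.6)
U0, U1 = U[:, :n], U[:, n:]

# L(lambda) = V (I - P)(lambda I - T)(U_0 + U_1 lambda); collect coefficients
W = V @ (np.eye(n * l) - P)
A0, A1, A2 = -W @ T @ U0, W @ (U0 - T @ U1), W @ U1

# (X, T) is a decomposable pair of L:  sum_i A_i X T^i = 0  (Eq. (2.3))
residual = A0 @ X + A1 @ X @ T + A2 @ X @ T @ T
```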

If (X, T) is a Jordan pair of a matrix polynomial \(L(\lambda )\), then it is a decomposable pair of \(L(\lambda )\) [1], so we can construct a matrix polynomial for a given Jordan pair (X, T) using the inverse representation theorem. A sufficient condition on a Jordan pair (X, T) for the corresponding matrix polynomial to be the symbol function of a compactly supported multiscaling function was derived in [2] and is as follows.

Theorem 2.2

[2] Let (X, T) = \(([X_1 \,\,X_2],T_1\oplus T_2)\) be a Jordan pair such that the \(nl\times nl\) matrix \((I\oplus T_2)-(T_1\oplus I)\) is of full rank. Then there exists a symbol function \(H(\xi )\) with Jordan pair (X, T) for which the corresponding multiscaling equation (1.1) has a solution vector \(\varPhi\) with \(\hat{\varPhi }\) continuous at 0 and \(\hat{\varPhi }(0)\ne 0\).

Thus, by choosing a Jordan pair (X, T) such that \((I\oplus T_2)-(T_1\oplus I)\) is of full rank, we can form a multiscaling function \(\varPhi\). Our aim now is to find additional conditions on (X, T) under which this multiscaling function is also symmetric. A multiscaling function vector \(\varPhi\) is symmetric if each of its component functions is symmetric about some point.

Definition 2.3

[3] The refinable function vector \(\varPhi =\left( \phi _i\right) _{i=1}^{n}\) is symmetric if each component function \(\phi _i\), \(1\le i\le n\), is symmetric about some point \(a_i\in \mathbb {R}\). That is,

$$\begin{aligned} \phi _i(a_i+x)=\phi _i(a_i-x)\,\forall x \in\mathbb{R},1\le i\le n. \end{aligned}$$
(2.7)

The symmetry of \(\varPhi\) is closely related to the properties of the associated symbol function \(H(\xi )\). A sufficient condition on \(H(\xi )\) for \(\varPhi\) to be symmetric is given by the following lemma.

Lemma 2.1

[3] If the symbol \(H(\xi )\) satisfies

$$\begin{aligned} H(\xi )=A(2\xi )H(-\xi )A(\xi )^{-1}, \end{aligned}$$
(2.8)

where

$$\begin{aligned} A(\xi )=\left( \begin{array}{ccccc} \pm e^{-2ia\xi } &{} &{} &{} &{} \\ &{} \pm e^{-2ia\xi } &{} &{} &{} \\ &{} &{} \pm e^{-2ia\xi } &{} &{} \\ &{} &{} &{}\ddots &{} \\ &{} &{} &{} &{} \pm e^{-2ia\xi } \\ \end{array} \right) , \end{aligned}$$

then \(\varPhi\) is symmetric about the point a.

In the next section, based on these results, we will find conditions on the Jordan pair (X, T) under which the corresponding multiscaling function vector is symmetric.

A multiscaling function \(\varPhi (x)\) is said to be orthogonal if

$$\begin{aligned} \langle\varPhi (x-k),\varPhi (x-t)\rangle=\int \varPhi (x-k)\varPhi (x-t)^*dx=\delta _{kt}I,\,\,\,k,t \in \mathbb {Z}. \end{aligned}$$
(2.9)

In some situations, biorthogonal or pseudo-biorthogonal bases are used instead of orthogonal ones; biorthogonal bases with additional properties can be more effective than orthogonal bases. Two multiscaling functions \(\varPhi (x)\), \(\tilde{\varPhi }(x)\) are biorthogonal if

$$\begin{aligned} \langle\varPhi (x-k),\tilde{\varPhi }(x-t)\rangle=\int \varPhi (x-k)\tilde{\varPhi }(x-t)^*dx=\delta _{kt}I. \end{aligned}$$
(2.10)

We call \(\tilde{\varPhi }(x)\) the dual of \(\varPhi (x)\). Let \(H(\xi )\) and \(F(\xi )\) be the symbol functions corresponding to the multiscaling functions \(\varPhi (x)\) and \(\tilde{\varPhi }(x)\) respectively. \(\varPhi (x)\) and \(\tilde{\varPhi }(x)\) form a pair of biorthogonal bases if and only if \(H(\xi )\) and \(F(\xi )\) satisfy the perfect reconstruction formula [4],

$$\begin{aligned} H(\xi )F^*(\xi )+H(\xi +\pi )F^*(\xi +\pi )=I . \end{aligned}$$
(2.11)

It may happen that instead of the perfect reconstruction formula, \(H(\xi )\) and \(F(\xi )\) satisfy the generalized condition of perfect reconstruction [5] given below,

$$\begin{aligned} H(\xi )F^*(\xi )+H(\xi +\pi )F^*(\xi +\pi )=cI ,\,\, c\ne 1. \end{aligned}$$
(2.12)

Then \(\varPhi (x)\) and \(\tilde{\varPhi }(x)\) form a pair of pseudo-biorthogonal multiscaling functions, i.e.,

$$\begin{aligned} \langle\varPhi (x-k),\tilde{\varPhi }(x-t)\rangle=\int \varPhi (x-k)\tilde{\varPhi }(x-t)^*dx=\delta _{kt}cI, \,\,c\ne 1. \end{aligned}$$
(2.13)

Taking \(z=e^{-i\xi }\), Eq. (2.12) becomes

$$\begin{aligned} H(z)F^*(z)+H(-z)F^*(-z)=cI ,\,\, c\ne 1. \end{aligned}$$
(2.14)

If we perform one analysis step followed by one synthesis step using a biorthogonal basis, we recover the initial signal exactly. With a pseudo-biorthogonal basis, an analysis step followed by a synthesis step produces the initial signal multiplied by c; we can recover the signal exactly by rescaling by c at each synthesis step [5]. In the scalar wavelet case, H(z) and F(z) are polynomials in z with \(H(1)=F(1)=1\) and \(H(-1)=F(-1)=0\) [3], so the case \(c\ne 1\) cannot occur for scalar wavelets.

Our aim is to construct a symbol function of degree 3 by selecting a suitable Jordan pair so that the corresponding multiscaling function \(\varPhi (x)\) is symmetric and compactly supported, and there exists a dual multiscaling function \(\tilde{\varPhi }(x)\) such that \(\{\varPhi (x-k):k\in \mathbb {Z}\}\) and \(\{\tilde{\varPhi }(x-k):k\in \mathbb {Z}\}\) form a pseudo-biorthogonal pair of bases. The condition on H(z) for pseudo-biorthogonality is given by Eq. (2.14). In this article, we first formulate the condition on H(z) for the symmetry of the corresponding multiscaling function, then construct a compactly supported, symmetric multiscaling function \(\varPhi (x)\), and finally construct the dual multiscaling function \(\tilde{\varPhi }(x)\) so that \(\varPhi (x)\) and \(\tilde{\varPhi }(x)\) form a pseudo-biorthogonal pair of multiscaling functions.

Symmetry

In this section, we define a symmetric matrix polynomial and show that a multiscaling function vector \(\varPhi\) is symmetric if the corresponding symbol function \(H(\xi )\) is symmetric. We then derive necessary as well as sufficient properties a Jordan pair (X, T) must possess for the corresponding matrix polynomial to be symmetric.

Definition 3.1

A matrix polynomial

$$\begin{aligned} L(\lambda )=A_0+A_1 \lambda +A_2 \lambda ^2+\cdots +A_l \lambda ^l \end{aligned}$$

is said to be symmetric if \(A_k=A_{l-k}\) for \(k=0,1,\ldots ,l\).

Lemma 3.1

If the symbol function

$$\begin{aligned} H(\xi )=A_0+A_1 e^{-i\xi }+A_2 e^{-2i\xi }+\cdots +A_l e^{-il\xi } \end{aligned}$$
(3.1)

of degree l is symmetric, then the corresponding multiscaling function \(\varPhi\) is symmetric about the point \(\frac{l}{2}\).

Proof

Given that

$$\begin{aligned} H(\xi )=A_0+A_1 e^{-i\xi }+A_2 e^{-2i\xi }+\cdots +A_l e^{-il\xi } \end{aligned}$$

is symmetric, i.e. \(A_k=A_{l-k}\) for \(k=0,1,\ldots ,l\). Then we can see that,

$$\begin{aligned} H(\xi )&=e^{-il\xi } H(-\xi ) \nonumber \\&=e^{-2il\xi } e^{il\xi } H(-\xi )\nonumber \\&=e^{-2il\xi } I_n H(-\xi ) e^{il\xi } I_n. \end{aligned}$$
(3.2)

Taking \(a=\frac{l}{2}\), we get

$$\begin{aligned} H(\xi )&=e^{-4ia\xi } I_n H(-\xi ) e^{2ia\xi } I_n\nonumber \\ \Rightarrow H(\xi )&=A(2\xi )H(-\xi )A(\xi )^{-1}, \end{aligned}$$
(3.3)

where

$$\begin{aligned} A(\xi )=\left( \begin{array}{ccccc} e^{-2ia\xi } &{} &{} &{} &{} \\ &{} e^{-2ia\xi } &{} &{} &{} \\ &{} &{} e^{-2ia\xi } &{} &{} \\ &{} &{} &{}\ddots &{} \\ &{} &{} &{} &{} e^{-2ia\xi } \\ \end{array} \right) . \end{aligned}$$

i.e. \(H(\xi )\) satisfies Eq. (2.8). Hence, by Lemma 2.1, the corresponding multiscaling function \(\varPhi\) is symmetric about the point \(a=\frac{l}{2}\). \(\square\)
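The identity (3.2) at the heart of this proof is easy to verify numerically. A sketch with randomly generated symmetric coefficients \(A_k=A_{l-k}\) and l = 4:

```python
import numpy as np

rng = np.random.default_rng(1)
n, l = 2, 4

# Symmetric coefficients: A_k = A_{l-k}
half = [rng.standard_normal((n, n)) for _ in range(l // 2 + 1)]   # A_0 .. A_{l/2}
A = half + half[-2::-1]                                           # mirror: A_3 = A_1, A_4 = A_0

def H(xi):
    """Symbol as in Eq. (3.1)."""
    return sum(Ak * np.exp(-1j * k * xi) for k, Ak in enumerate(A))

xi = 0.73                                         # arbitrary test point
lhs, rhs = H(xi), np.exp(-1j * l * xi) * H(-xi)   # identity (3.2)
```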

Our aim is to find necessary as well as sufficient conditions on a Jordan pair such that the corresponding multiscaling function \(\varPhi\) is symmetric. We have shown that the multiscaling function corresponding to a symmetric symbol function is symmetric. Since the symbol function is a matrix polynomial, the problem reduces to finding the properties of the Jordan pair of a symmetric matrix polynomial. A crucial necessary property is given by the following lemma.

Lemma 3.2

If \(L(\lambda )\) is a symmetric matrix polynomial, then its Jordan pair (X, T) has the property that if \(\lambda _0\ne 0\) is an eigenvalue of \(L(\lambda )\) with eigenvector \(x_0\), then \(\frac{1}{\lambda _0}\) is also an eigenvalue with the same eigenvector \(x_0\). If 0 is an eigenvalue of \(L(\lambda )\), then \(L(\lambda )\) has an eigenvalue at infinity with the same eigenvector as that of 0.

Proof

Given that

$$\begin{aligned} L(\lambda )=A_0+A_1 \lambda +A_2 \lambda ^2+\cdots +A_l \lambda ^l \end{aligned}$$

is a symmetric matrix polynomial. If 0 is an eigenvalue of \(L(\lambda )\), then since \(\tilde{L}(\lambda )=\lambda ^l L(\frac{1}{\lambda })=L(\lambda )\), the matrix polynomial \(\tilde{L}(\lambda )\) also has the eigenvalue 0, i.e. \(L(\lambda )\) has an eigenvalue at infinity (by definition). Now let \(\lambda _0\ne 0\) be an eigenvalue with eigenvector \(x_0\); then we have

$$\begin{aligned} L(\lambda _0)x_0=0. \end{aligned}$$

Since \(L(\lambda )\) is a symmetric matrix polynomial, we have

$$\begin{aligned} L(\lambda _0)=\lambda _0^l L(\frac{1}{\lambda _0}). \end{aligned}$$
(3.4)

Then

$$\begin{aligned} L(\lambda _0)x_0=\lambda _0^l L(\frac{1}{\lambda _0})x_0=0. \end{aligned}$$
(3.5)

Since \(\lambda _0\ne 0\), we have

$$\begin{aligned} L(\frac{1}{\lambda _0})x_0=0, \end{aligned}$$
(3.6)

i.e. \(\frac{1}{\lambda _0}\) is also an eigenvalue with the same eigenvector \(x_0\). \(\square\)
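The reciprocal pairing asserted by Lemma 3.2 can be observed numerically. A sketch with a random symmetric matrix polynomial of degree 2, its eigenvalues computed via companion linearization (we assume \(A_0\) invertible, so 0 is not an eigenvalue):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2

# A symmetric matrix polynomial L(lam) = A0 + A1*lam + A0*lam^2  (A_0 = A_2)
A0 = rng.standard_normal((n, n)) + 2 * np.eye(n)   # generically invertible
A1 = rng.standard_normal((n, n))

# Companion linearization: its eigenvalues are the eigenvalues of L
M = np.linalg.inv(A0)
Cmp = np.block([[np.zeros((n, n)), np.eye(n)],
                [-M @ A0, -M @ A1]])
lam = np.linalg.eigvals(Cmp)

# Lemma 3.2: every eigenvalue comes paired with its reciprocal
pair_gap = [min(abs(lam - 1 / z)) for z in lam]
```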

Lemma 3.2 states that for a symmetric matrix polynomial, the eigenvalues necessarily occur in reciprocal pairs. Our aim is to construct a symmetric matrix polynomial by selecting a suitable Jordan pair (X, T) with only finite eigenvalues. We now state sufficient properties of a Jordan pair (X, T) for the corresponding matrix polynomial to be symmetric.

Theorem 3.1

Let (X, T) be a Jordan pair where \(X\in \mathbb {C}^{n\times nl}\) and T is a diagonal matrix of order nl whose diagonal entries are the eigenvalues, with \(n\in \mathbb {N}\), \(n\ge 2\) and l even. Suppose that all diagonal entries of T are nonzero, none equals 1, and the eigenvalues in T occur in reciprocal pairs with the same eigenvectors in X, i.e. if \(\lambda _0\) is an eigenvalue in T with eigenvector \(x_0\), then \(\frac{1}{\lambda _0}\) is also an eigenvalue in T with the same eigenvector \(x_0\). Then a matrix polynomial with Jordan pair (X, T) is symmetric.

Proof

Given that (X, T) is a Jordan pair where \(X\in \mathbb {C}^{n\times nl}\) and \(T\in \mathbb {C}^{nl\times nl}\) is diagonal. Let \(\lambda _i\) \((i=1,2,\ldots ,\frac{nl}{2})\) be the eigenvalues in T; since the eigenvalues occur in reciprocal pairs, \(\frac{1}{\lambda _i}\) \((i=1,2,\ldots ,\frac{nl}{2})\) are also eigenvalues. Assume that \(L(\lambda )=A_0+A_1 \lambda +A_2 \lambda ^2+\cdots +A_l \lambda ^l\) is a matrix polynomial with the Jordan pair (X, T). We have to show that \(L(\lambda )\) is symmetric, i.e. \(A_k=A_{l-k}\) for all k, or equivalently that

$$\begin{aligned} L(\lambda )=\lambda ^lL(\frac{1}{\lambda }). \end{aligned}$$

Assume on the contrary that \(L(\lambda )\ne \lambda ^lL(\frac{1}{\lambda })\), i.e. the matrix polynomial \(N(\lambda )=L(\lambda )-\lambda ^lL(\frac{1}{\lambda })\ne 0\). The sum of the algebraic multiplicities of the eigenvalues of a matrix polynomial equals the degree of its determinant [1]. Since \(N(\lambda )\) is a nonzero matrix polynomial of degree l and order n, the sum of the algebraic multiplicities of its eigenvalues is at most nl. We will show that if \(N(\lambda )\ne 0\), then the total algebraic multiplicity exceeds nl, which is a contradiction.

We claim that \(\lambda _i\) and \(\frac{1}{\lambda _i}\) \((i=1,2\cdots \frac{nl}{2})\) are eigenvalues of \(N(\lambda )\) with same eigenvectors they had for \(L(\lambda )\). To prove this, suppose that \(\lambda _i\) is an eigenvalue of \(L(\lambda )\) with eigenvector \(x_i\) for some i. Then we have \(L(\lambda _i)x_i=0\) and \(L(\frac{1}{\lambda _i})x_i=0\). Now,

$$\begin{aligned} N(\lambda _i)x_i&=(L(\lambda _i)-\lambda _i^lL(\frac{1}{\lambda _i}))x_i\\ &=L(\lambda _i)x_i-\lambda _i^lL(\frac{1}{\lambda _i})x_i=0 , \end{aligned}$$

and

$$\begin{aligned} N(\frac{1}{\lambda _i})x_i&=(L(\frac{1}{\lambda _i})-\frac{1}{\lambda _i^l}L(\lambda _i))x_i\\ &=L(\frac{1}{\lambda _i})x_i-\frac{1}{\lambda _i^l}L(\lambda _i)x_i=0. \end{aligned}$$

Thus \(\lambda _i\) and \(\frac{1}{\lambda _i}\) are eigenvalues of \(N(\lambda )\) for \(i=1,2,\ldots ,\frac{nl}{2}\), giving a total of \(\frac{nl}{2}+\frac{nl}{2}=nl\) eigenvalues for \(N(\lambda )\). Now \(N(\lambda )\) is given by

$$\begin{aligned} N(\lambda )&=(A_0-A_l)+(A_1-A_{l-1})\lambda +(A_2-A_{l-2})\lambda ^2+\cdots +(A_{l-2}-A_2)\lambda ^{l-2}\nonumber \\ &\quad+(A_{l-1}-A_1)\lambda ^{l-1}+(A_{l}-A_0)\lambda ^{l}. \end{aligned}$$
(3.7)

Then

$$\begin{aligned} N(1)&=(A_0-A_l)+(A_1-A_{l-1})+(A_2-A_{l-2})+\cdots +(A_{l-2}-A_2)\nonumber \\ &\quad+(A_{l-1}-A_1)+(A_{l}-A_0)=0. \end{aligned}$$

\(\Rightarrow N(1)y_i=0,\) for linearly independent eigenvectors \(y_i\), \(i=1,2\cdots n\).

i.e. 1 is an eigenvalue of \(N(\lambda )\) with algebraic multiplicity n. Then the sum of algebraic multiplicities of eigenvalues of \(N(\lambda )\) is at least \(nl+n\) (there can be other eigenvalues also), which is not possible since the sum of algebraic multiplicities of all eigenvalues of \(N(\lambda )\) should not exceed nl. Hence our assumption that \(L(\lambda )\ne \lambda ^lL(\frac{1}{\lambda })\) is false. We can conclude that \(L(\lambda )= \lambda ^lL(\frac{1}{\lambda })\), i.e. the matrix polynomial \(L(\lambda )\) is symmetric. \(\square\)
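Combining the formula of Theorem 2.1 with the hypotheses of Theorem 3.1 gives a numerical check of this result: if T carries reciprocal eigenvalue pairs sharing eigenvectors, the constructed coefficients satisfy \(A_0=A_l\). A sketch with n = 2, l = 2, all eigenvalues finite and a random V:

```python
import numpy as np

rng = np.random.default_rng(3)
n, l = 2, 2

# Reciprocal eigenvalue pairs, each pair sharing one eigenvector (Theorem 3.1)
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([x1, x1, x2, x2])
T = np.diag([2.0, 0.5, -3.0, -1.0 / 3.0])

# Inverse representation (Theorem 2.1), all-finite-eigenvalue case
S1 = np.vstack([X, X @ T])
V = rng.standard_normal((n, n * l))
E = np.vstack([np.eye(n), np.zeros((n, n))])
P = np.linalg.solve(S1, E @ X)               # Eq. (2.5); S_{l-2} = X here
U = np.linalg.inv(S1)
U0, U1 = U[:, :n], U[:, n:]

W = V @ (np.eye(n * l) - P)
A0, A1, A2 = -W @ T @ U0, W @ (U0 - T @ U1), W @ U1

sym_gap = np.linalg.norm(A0 - A2)            # A_0 = A_l, as Theorem 3.1 predicts
residual = np.linalg.norm(A0 @ X + A1 @ X @ T + A2 @ X @ T @ T)
```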

We can construct symmetric matrix polynomials of even degree using the above result. To construct symmetric matrix polynomials of odd degree, we must select the Jordan pair (X, T) with minor changes. For any symmetric matrix polynomial \(L(\lambda )\) of odd degree, one easily verifies that \(L(-1)=0\) (the terms \(A_k(-1)^k\) and \(A_{l-k}(-1)^{l-k}\) cancel in pairs since l is odd); then \(L(-1)p_i=0\) for n linearly independent vectors \(p_i\), \(i=1,2,\ldots ,n\), so −1 is an eigenvalue of \(L(\lambda )\) with multiplicity n. Incorporating this change, we state the preceding result for odd values of l.

Theorem 3.2

Let (X, T) be a Jordan pair where \(X\in \mathbb {C}^{n\times nl}\) and T is a diagonal matrix of order nl, with \(n\in \mathbb {N}\), \(n\ge 2\) and l odd. Suppose that all diagonal entries of T are nonzero and none equals 1, that the eigenvalues in T different from −1 occur in reciprocal pairs with the same eigenvectors in X, and that −1 is an eigenvalue in T with multiplicity n. Then a matrix polynomial with Jordan pair (X, T) is symmetric.

Proof

Given that −1 occurs n times in T, there are \(nl-n\) eigenvalues in T other than −1, and they occur in reciprocal pairs: if \(\lambda _i\) is an eigenvalue in T, then so is \(\frac{1}{\lambda _i}\). Thus, for \(i=1,2,\ldots ,\frac{nl-n}{2}\), \(\lambda _i\) and its reciprocal \(\frac{1}{\lambda _i}\) are eigenvalues in T, and together they constitute the \(nl-n\) eigenvalues (here \(nl-n\) is always even since l is odd). Now let \(L(\lambda )=A_0+A_1 \lambda +A_2 \lambda ^2+\cdots +A_l \lambda ^l\) be a matrix polynomial with Jordan pair (X, T); we have to show that

$$\begin{aligned} L(\lambda )=\lambda ^lL(\frac{1}{\lambda }). \end{aligned}$$

As in the previous proof, assume on the contrary that \(L(\lambda )\ne \lambda ^lL(\frac{1}{\lambda })\), i.e. the matrix polynomial \(N(\lambda )=L(\lambda )-\lambda ^lL(\frac{1}{\lambda })\ne 0\). Since \(N(\lambda )\) is a nonzero matrix polynomial of degree l and order n, the sum of the algebraic multiplicities of its eigenvalues is at most nl. We claim that \(\lambda _i\) and \(\frac{1}{\lambda _i}\) \((i=1,2,\ldots ,\frac{nl-n}{2})\) are eigenvalues of \(N(\lambda )\) with the same eigenvectors they had for \(L(\lambda )\). To prove this, suppose that \(\lambda _i\) is an eigenvalue of \(L(\lambda )\) with eigenvector \(x_i\) for some i. Then \(L(\lambda _i)x_i=0\) and \(L(\frac{1}{\lambda _i})x_i=0\). Now,

$$\begin{aligned} N(\lambda _i)x_i&=(L(\lambda _i)-\lambda _i^lL(\frac{1}{\lambda _i}))x_i\\ &=L(\lambda _i)x_i-\lambda _i^lL(\frac{1}{\lambda _i})x_i=0, \end{aligned}$$

and

$$\begin{aligned} N(\frac{1}{\lambda _i})x_i&=(L(\frac{1}{\lambda _i})-\frac{1}{\lambda _i^l}L(\lambda _i))x_i\\ &=L(\frac{1}{\lambda _i})x_i-\frac{1}{\lambda _i^l}L(\lambda _i)x_i=0. \end{aligned}$$

i.e. \(\lambda _i\) and \(\frac{1}{\lambda _i}\) are eigenvalues of \(N(\lambda )\) for \(i=1,2,\ldots ,\frac{nl-n}{2}\), giving a total of \(\frac{nl-n}{2}+\frac{nl-n}{2}=nl-n\) eigenvalues for \(N(\lambda )\). Since −1 is an eigenvalue of \(L(\lambda )\) with algebraic multiplicity n, \(L(-1)p_i=0\) for linearly independent eigenvectors \(p_i\), \(i=1,2,\ldots ,n\). Then we have,

$$\begin{aligned} N(-1)p_i=(L(-1)-(-1)^lL(\frac{1}{-1}))p_i=L(-1)p_i-(-1)^lL(-1)p_i=0, \end{aligned}$$

for \(i=1,2\cdots n\).

i.e. −1 is an eigenvalue of \(N(\lambda )\) with algebraic multiplicity n. Then, the sum of algebraic multiplicities of eigenvalues of \(N(\lambda )\) is at least \(nl-n+n=nl\). Now, \(N(\lambda )\) is given by

$$\begin{aligned} N(\lambda )=(A_0-A_l)+(A_1-A_{l-1})\lambda +(A_2-A_{l-2})\lambda ^2+\cdots +(A_{l-2}-A_2)\lambda ^{l-2}\nonumber \\ +(A_{l-1}-A_1)\lambda ^{l-1}+(A_{l}-A_0)\lambda ^{l} . \end{aligned}$$

Then

$$\begin{aligned} N(1)=(A_0-A_l)+(A_1-A_{l-1})+(A_2-A_{l-2})+\cdots +(A_{l-2}-A_2)\\ +(A_{l-1}-A_1)+(A_{l}-A_0)=0 \end{aligned}$$

\(\Rightarrow N(1)y_i=0\) for linearly independent eigenvectors \(y_i\), \(i=1,2\cdots n\).

i.e. 1 is an eigenvalue of \(N(\lambda )\) with algebraic multiplicity n. Then, the sum of algebraic multiplicities of eigenvalues of \(N(\lambda )\) is at least \(nl+n\) (there can be other eigenvalues also), which is not possible since the sum of algebraic multiplicities of all eigenvalues of \(N(\lambda )\) cannot exceed nl. Hence, our assumption that \(L(\lambda )\ne \lambda ^lL(\frac{1}{\lambda })\) is false. We conclude that \(L(\lambda )= \lambda ^lL(\frac{1}{\lambda })\), i.e. the matrix polynomial \(L(\lambda )\) is symmetric. \(\square\)

Construction of multiscaling function

We have obtained the properties of the spectral data of a matrix polynomial under which it is the symbol function of a symmetric multiscaling function \(\varPhi\). Since each entry of this symbol function is a trigonometric polynomial (an algebraic polynomial in \(z=e^{-i\xi }\)), the associated multiscaling function is compactly supported [6]. We obtain the multiscaling function itself by the cascade algorithm [3], which converges if the multiscaling coefficients satisfy certain properties and the initial function is appropriately chosen. Let \(H_0\), \(H_1\), \(H_2\), \(H_3\in \mathbb {C}^{2\times 2}\) be the set of multiscaling coefficients and define the \(8\times 8\) matrix D as

$$\begin{aligned} D=\left( \begin{array}{cccc} H_0&{}0&{}0&{}0 \\ H_2&{}H_1&{}H_0&{}0 \\ 0&{}H_3 &{} H_2 &{} H_1 \\ 0 &{} 0 &{} 0 &{} H_3 \\ \end{array} \right) . \end{aligned}$$
(4.1)

Let \(D_0\) be the \(3\times 3\) block submatrix of D at the top left, and let \(D_k\) be the \(3\times 3\) block submatrix obtained by moving k block steps along the main diagonal; equivalently, the (i, j) block of \(D_k\) is \(H_{2i-j+k}\). Then

$$\begin{aligned} D_0=\left( \begin{array}{ccc} H_0 &{} 0 &{} 0 \\ H_2 &{} H_1&{} H_0 \\ 0 &{} H_3 &{} H_2\\ \end{array} \right) \end{aligned}$$
(4.2)

and

$$\begin{aligned} D_1=\left( \begin{array}{ccc} H_1 &{} H_0 &{} 0 \\ H_3 &{} H_2&{} H_1 \\ 0 &{} 0 &{} H_3\\ \end{array} \right) . \end{aligned}$$
(4.3)
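The block pattern of D, \(D_0\) and \(D_1\) is thus \((i,j)\mapsto H_{2i-j+k}\) with shift k. A sketch that assembles such matrices from arbitrary 2×2 blocks (placeholder values only; whether the blocks are \(H_k\) or \(\sqrt{2}H_k\) depends on the normalization used, cf. the numerical matrices later in this section):

```python
import numpy as np

def transition_blocks(H, rows, cols, shift=0):
    """Block matrix whose (i, j) block is H_{2i - j + shift}; out-of-range
    indices give zero blocks.  shift = k produces the submatrix D_k."""
    n = H[0].shape[0]
    M = np.zeros((rows * n, cols * n))
    for i in range(rows):
        for j in range(cols):
            k = 2 * i - j + shift
            if 0 <= k < len(H):
                M[i * n:(i + 1) * n, j * n:(j + 1) * n] = H[k]
    return M

# Four hypothetical 2x2 blocks (placeholders, for illustration only)
Hk = [np.full((2, 2), float(k)) for k in range(4)]
D  = transition_blocks(Hk, 4, 4)       # pattern of Eq. (4.1)
D0 = transition_blocks(Hk, 3, 3)       # pattern of Eq. (4.2)
D1 = transition_blocks(Hk, 3, 3, 1)    # pattern of Eq. (4.3)
```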

Definition 4.1

[3] The recursion coefficients \({H_k}\) of a matrix refinement equation with dilation factor m satisfy the sum rules of order p if there exist vectors \(y_0, y_1,\ldots ,y_{p-1}\) with \(y_0\ne 0\) such that

$$\begin{aligned} \sum _{t=0}^{n} \left( {\begin{array}{c}n\\ t\end{array}}\right) m^t (-i)^{n-t} y_t D^{n-t}H(\frac{2\pi s}{m})= \delta _{0s}y_n, s=0,1,2\ldots m-1, \end{aligned}$$
(4.4)

for \(n=0,1,\ldots ,p-1\), where \(D^{n-t}H\) denotes the \((n-t)\)th derivative of the symbol H.

Theorem 4.1

[3] Assume that \(H(\xi )\) satisfies the sum rules of order 1, and the joint spectral radius

$$\begin{aligned} \rho (D_0|F_1,D_1|F_1\ldots D_{m-1}|F_1)=\lambda <1, \end{aligned}$$

where \(F_1\) is the orthogonal complement of the common left eigenvector \(e^*=(\mu _0^*,\mu _0^*,\ldots ,\mu _0^*)\) of \(D_0\), \(D_1,\ldots ,D_{m-1}\). Then the cascade algorithm has a unique solution \(\varPhi\), which is Hölder continuous of order \(\alpha\) for any \(\alpha < -\log _m\lambda\).

A method to find \(H(\xi )\) satisfying the sum rules of order 1 by suitably selecting the Jordan pair (X, T) is given in [2]. Based on that method, we construct the symbol function \(H(\xi )\) so that it satisfies the sum rules of order 1. While finding the multiscaling coefficients \(H_k\), we ensure that the value of the joint spectral radius

$$\begin{aligned} \rho (D_0|F_1,D_1|F_1\ldots D_{m-1}|F_1) \end{aligned}$$

is less than 1. Then, by Theorem 4.1, the cascade algorithm converges for the set of multiscaling coefficients \(H_k\). An example of this construction follows. We start with a Jordan pair (X, T) satisfying the conditions

  1. \(I-T\) is of full rank (Theorem 2.2).

  2. The eigenvalues are nonzero and not equal to 1, and they occur in reciprocal pairs with the same eigenvectors; also, −1 is an eigenvalue with multiplicity n (in the following example, n = 2) (Theorem 3.2).

Without loss of generality, we take \(X\in \mathbb {C}^{2\times 6}\) and \(T\in \mathbb {C}^{6\times 6}\) satisfying the conditions listed above, given by

$$\begin{aligned} X= \left( \begin{array}{cccccc} 0.2785 &{} 0.2785 &{} 0.9575 &{} 0.9575 &{} 1.0000 &{} 0.8003\\ 0.5469 &{} 0.5469 &{} 0.9649 &{} 0.9649 &{} 0 &{} 0.1419 \end{array}\right) \end{aligned}$$

and

$$\begin{aligned} T=\left( \begin{array}{cccccc} 34 &{} 0 &{} 0 &{}0 &{} 0&{} 0\\ 0 &{} 0.0294 &{} 0 &{} 0&{}0&{}0 \\ 0 &{} 0 &{} -33 &{} 0&{} 0 &{}0\\ 0 &{} 0 &{} 0 &{} -0.0303 &{} 0&{}0 \\ 0 &{} 0&{} 0 &{} 0&{}-1&{}0 \\ 0 &{} 0 &{} 0 &{} 0&{}0&{}-1 \\ \end{array} \right) . \end{aligned}$$

It can be verified that \(I-T\) has rank 6, so there exists a matrix polynomial with Jordan pair (X, T) which is the symbol function of a multiscaling function vector \(\varPhi\) (Theorem 2.2). Since (X, T) satisfies the conditions in Theorem 3.2, the matrix polynomial \(H(\xi )\) must be symmetric. Employing the procedure of [2] to find the multiscaling coefficients \(H_k\), we obtain the coefficients as follows.

$$\begin{aligned} H_0= \left( \begin{array}{cc} 0.0647 &{} -0.0442\\ 0.0525 &{} -0.0400 \end{array}\right) , \end{aligned}$$
$$\begin{aligned} H_1= \left( \begin{array}{cc} 0.6424 &{} 0.0442\\ -0.0525 &{} 0.4642 \end{array} \right) , \end{aligned}$$
$$\begin{aligned} H_2= \left( \begin{array}{cc} 0.6424 &{} 0.0442\\ -0.0525 &{} 0.4642 \end{array}\right) , \end{aligned}$$
$$\begin{aligned} H_3= \left( \begin{array}{cc} 0.0647 &{} -0.0442\\ 0.0525 &{} -0.0400 \end{array}\right) . \end{aligned}$$

We get

$$\begin{aligned} H(0)=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 0.6 \\ \end{array}\right) \end{aligned}$$

and

$$\begin{aligned} H(-1)=\left( \begin{array}{cc} 0 &{} 0 \\ 0 &{} 0 \\ \end{array} \right) . \end{aligned}$$

For \(y_0=(1 \,\, 0)\), we get \(y_0H(0)=y_0\) and \(y_0H(-1)=(0 \,\, 0)\). Thus, H satisfies the sum rules of order 1.
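This verification can be reproduced directly from the listed coefficients. A sketch (with the \(1/\sqrt{2}\) normalization of Eq. (1.3), evaluating the symbol at \(\xi =0\) and \(\xi =\pi\), i.e. at \(z=1\) and \(z=-1\)):

```python
import numpy as np

H0 = np.array([[0.0647, -0.0442], [0.0525, -0.0400]])
H1 = np.array([[0.6424,  0.0442], [-0.0525,  0.4642]])
Hk = [H0, H1, H1, H0]                        # H_2 = H_1, H_3 = H_0 (symmetric)

def H(xi):
    """Symbol (1.3) built from the coefficients above."""
    return sum(A * np.exp(-1j * k * xi) for k, A in enumerate(Hk)) / np.sqrt(2)

y0 = np.array([1.0, 0.0])
H_at_0, H_at_pi = H(0.0), H(np.pi)           # z = 1 and z = -1
```

Up to the rounding of the published coefficients, \(H(0)\approx \mathrm{diag}(1, 0.6)\), \(H(\pi )=0\) and \(y_0H(0)\approx y_0\), confirming the sum rules of order 1.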

Now we find the multiscaling function vector \(\varPhi\) corresponding to the above multiscaling coefficients. For that, we need to ensure that the conditions of Theorem 4.1 are satisfied. The matrices D, \(D_0\) and \(D_1\) associated with H (built from the blocks \(\sqrt{2}H_k\)) are given by

$$\begin{aligned} D=\left( \begin{array}{llllllll} 0.0915&{}-0.0625&{}0&{}0&{}0&{}0&{}0&{}0\\ 0.0742&{} -0.0565&{}0&{}0&{}0&{}0&{}0&{}0\\ 0.9085 &{}0.0625 &{} 0.9085 &{} 0.0625 &{} 0.0915 &{} -0.0625 &{}0&{}0\\ -0.0742&{}0.6565 &{} -0.0742 &{} 0.6565 &{} 0.0742 &{} -0.0565&{}0&{}0\\ 0&{}0&{}0.0915 &{} -0.0625 &{} 0.9085 &{} 0.0625 &{} 0.9085 &{} 0.0625\\ 0&{}0&{}0.0742 &{} -0.0565 &{} -0.0742 &{} 0.6565 &{} -0.0742 &{} 0.6565\\ 0&{}0&{}0&{}0&{}0&{}0&{}0.0915 &{} -0.0625\\ 0&{}0&{}0&{}0&{}0&{}0&{}0.0742 &{} -0.0565\\ \end{array}\right) , \end{aligned}$$
$$\begin{aligned} D_0= \left( \begin{array}{llllllll} 0.0915&{}-0.0625&{}0&{}0&{}0&{}0\\ 0.0742&{} -0.0565&{}0&{}0&{}0&{}0\\ 0.9085 &{}0.0625 &{} 0.9085 &{} 0.0625 &{} 0.0915 &{} -0.0625 \\ -0.0742&{}0.6565 &{} -0.0742 &{} 0.6565 &{} 0.0742 &{} -0.0565\\ 0&{}0&{}0.0915 &{} -0.0625 &{} 0.9085 &{} 0.0625 \\ 0&{}0&{}0.0742 &{} -0.0565 &{} -0.0742 &{} 0.6565 \end{array}\right) , \end{aligned}$$
$$\begin{aligned} D_1=\left( \begin{array}{llllllll} 0.9085 &{} 0.0625 &{} 0.0915 &{} -0.0625 &{}0&{}0\\ -0.0742 &{} 0.6565 &{} 0.0742 &{} -0.0565&{}0&{}0\\ 0.0915 &{} -0.0625 &{} 0.9085 &{} 0.0625 &{} 0.9085 &{} 0.0625\\ 0.0742 &{} -0.0565 &{} -0.0742 &{} 0.6565 &{} -0.0742 &{} 0.6565\\ 0&{}0&{}0&{}0&{}0.0915 &{} -0.0625\\ 0&{}0&{}0&{}0&{}0.0742 &{} -0.0565\\ \end{array}\right) , \end{aligned}$$
$$\begin{aligned} \rho (D_0|F_1,D_1|F_1)=0.7757<1, \end{aligned}$$

where \(F_1\) is the orthogonal complement of the common left eigenvector

$$\begin{aligned}\left( \begin{array}{c} 0.5774\\ 0.0000 \\ 0.5774 \\ 0.0000 \\ 0.5774\\ 0.0000 \end{array}\right) \end{aligned}$$

of \(D_0\), \(D_1\). Also, H satisfies the sum rules of order 1, so H satisfies the conditions of Theorem 4.1. The cascade algorithm will converge to a multiscaling function \(\varPhi\) which is Hölder continuous if the initial function is chosen as a piecewise linear function interpolating appropriate values at the integers [3]. Since the symbol function H is a matrix polynomial of degree 3, the support of \(\varPhi\) is contained in [0, 3], so we need the values of \(\varPhi ^0\) at the points 0, 1, 2 and 3; these are given by the 1-eigenvector of the matrix D:

$$\begin{aligned}\left( \begin{array}{c} 0.0000\\ 0.0000\\ 0.7071\\ 0.0000\\ 0.7071\\ 0.0000\\ 0.0000\\ 0.0000 \end{array} \right) =\left( \begin{array}{c} \varPhi ^{0}(0) \\ \varPhi ^{0}(1) \\ \varPhi ^{0}(2) \\ \varPhi ^{0}(3) \\ \end{array} \right) . \end{aligned}$$

Now choose the initial function \(\varPhi ^{0}\) as the piecewise linear function interpolating these values at the points 0, 1, 2 and 3. Then the cascade algorithm converges to the solution \(\varPhi\) (Fig. 1), which is compactly supported in [0, 3] and symmetric about the point 1.5.
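A sketch of the whole procedure in Python/NumPy, assuming (as the numerical matrices above indicate) that the refinement operator uses the blocks \(\sqrt{2}H_k\); the dyadic-grid handling is our own implementation choice:

```python
import numpy as np

H0 = np.array([[0.0647, -0.0442], [0.0525, -0.0400]])
H1 = np.array([[0.6424,  0.0442], [-0.0525,  0.4642]])
coef = [np.sqrt(2) * A for A in (H0, H1, H1, H0)]   # sqrt(2) H_k; H_2 = H_1, H_3 = H_0

# The 8x8 matrix D acting on the integer samples (Phi(0), ..., Phi(3))
D = np.zeros((8, 8))
for i in range(4):
    for j in range(4):
        if 0 <= 2 * i - j < 4:
            D[2 * i:2 * i + 2, 2 * j:2 * j + 2] = coef[2 * i - j]

# Values of Phi at the integers: the stated 1-eigenvector of D
phi = np.zeros((4, 2))
phi[1] = phi[2] = [0.7071, 0.0]
eig_gap = np.linalg.norm(D @ phi.reshape(-1) - phi.reshape(-1))

# Cascade iteration on dyadic grids:
#   Phi(i / 2^{j+1}) = sum_k sqrt(2) H_k Phi(i / 2^j - k)
for _ in range(10):
    p = (phi.shape[0] - 1) // 3            # p = 2^j grid points per unit interval
    new = np.zeros((6 * p + 1, 2))
    for i in range(6 * p + 1):
        for k in range(4):
            t = i - k * p                  # old-grid index of the point 2x - k
            if 0 <= t < phi.shape[0]:
                new[i] += coef[k] @ phi[t]
    phi = new
```

With these (rounded, as published) coefficients the iterates remain symmetric about x = 1.5, matching Fig. 1.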

Fig. 1 The two components \(\phi _0\) and \(\phi _1\) of the multiscaling function \(\varPhi\) corresponding to the obtained multiscaling coefficients. Both components are symmetric and compactly supported in [0, 3]

Thus, we have constructed a compactly supported multiscaling function \(\varPhi\) which is symmetric about the point 1.5.
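The cascade iteration used above, \(\varPhi^{j+1}(x)=\sqrt{2}\sum_k H_k\varPhi^{j}(2x-k)\), can be sketched in a few lines. Since the coefficient matrices \(H_k\) of our example appear earlier in the paper, the sketch below instead exercises the operator on the scalar case \(n=1\) with the standard hat-function (linear B-spline) coefficients, whose fixed point is known:

```python
import numpy as np

def cascade_step(H, x, Phi):
    """One cascade iteration: Phi_new(x) = sqrt(2) * sum_k H_k Phi(2x - k).

    H   : list of (n, n) coefficient matrices H_0, ..., H_l
    x   : 1-D grid over the support [0, l]
    Phi : (n, len(x)) samples of the current iterate (zero off the grid)
    """
    new = np.zeros_like(Phi)
    for k, Hk in enumerate(H):
        # evaluate Phi(2x - k) component-wise by linear interpolation
        shifted = np.array([np.interp(2 * x - k, x, row, left=0.0, right=0.0)
                            for row in Phi])
        new += np.sqrt(2) * Hk @ shifted
    return new

# sanity check: the hat function on [0, 2] is refinable with
# H_0 = H_2 = 1/(2*sqrt(2)), H_1 = 1/sqrt(2), so it is a fixed point
s = np.sqrt(2)
H = [np.array([[1 / (2 * s)]]), np.array([[1 / s]]), np.array([[1 / (2 * s)]])]
x = np.linspace(0.0, 2.0, 2**8 + 1)
Phi = (1.0 - np.abs(x - 1.0))[None, :]          # hat-function samples
Phi_next = cascade_step(H, x, Phi)
assert np.max(np.abs(Phi_next - Phi)) < 1e-12   # fixed point up to rounding
```

Replacing H, the grid and the initial samples by the matrix coefficients and the 1-eigenvector data above reproduces the iteration that converges to \(\varPhi\) in Fig. 1.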

Construction of pseudo-biorthogonal symmetric multiscaling functions

In the previous sections, we constructed a symbol function H(z) and the corresponding multiscaling function \(\varPhi (x)\), which is symmetric and compactly supported. We now construct the dual multiscaling function \(\tilde{\varPhi }(x)\). For that, we must find the dual symbol F(z) corresponding to H(z) so that the generalized condition of perfect reconstruction (2.14) holds. For a given symbol H(z), a dual symbol F(z) satisfying Eq. (2.14) exists if the determinants of H(z) and \(H(-z)\) have no common roots [5]. The Jordan pair \((X, T)\) that we selected for constructing H(z) is given by

$$\begin{aligned} X= \left( \begin{array}{cccccc} 0.2785 & 0.2785 & 0.9575 & 0.9575 & 1.0000 & 0.8003\\ 0.5469 & 0.5469 & 0.9649 & 0.9649 & 0 & 0.1419 \end{array}\right) \end{aligned}$$

and

$$\begin{aligned} T=\left( \begin{array}{cccccc} 34 & 0 & 0 & 0 & 0 & 0\\ 0 & 0.0294 & 0 & 0 & 0 & 0 \\ 0 & 0 & -33 & 0 & 0 & 0\\ 0 & 0 & 0 & -0.0303 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \end{array}\right) . \end{aligned}$$
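Two spectral conditions will now be read off this diagonal: eigenvalues in reciprocal pairs, and no eigenvalue whose negative is again an eigenvalue. Both can be checked in a few lines; a sketch with the values rounded as printed (the tolerance accounts for the 4-decimal rounding):

```python
import numpy as np

# diagonal entries of T, i.e. the eigenvalues of H(z), rounded to 4 decimals
eigs = np.array([34.0, 0.0294, -33.0, -0.0303, -1.0, -1.0])

def occurs(value, values=eigs, rel=5e-4):
    """Is `value` among `values`, up to the 4-decimal rounding of the paper?"""
    return bool(np.any(np.abs(values - value) < rel * max(1.0, abs(value))))

# eigenvalues occur in reciprocal pairs -> H(z) is symmetric (Lemma 3.2)
assert all(occurs(1.0 / lam) for lam in eigs)

# the negative of an eigenvalue is never again an eigenvalue
# -> det H(z) and det H(-z) share no roots, so a dual symbol F(z) exists [5]
assert not any(occurs(-lam) for lam in eigs)
```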

The diagonal entries of T are the eigenvalues of the obtained symbol function H(z). From the diagonal entries of T, it is clear that the negative of an eigenvalue is never again an eigenvalue. Since the eigenvalues of H(z) are precisely the roots of the determinant of H(z), the negative of a root of the determinant of H(z) is never again a root. In other words, the determinants of H(z) and \(H(-z)\) have no common roots. Thus, there exists a dual symbol F(z) such that the generalized condition of perfect reconstruction is satisfied. Using the cofactor method given in [5], we obtain the dual symbol function F(z) corresponding to H(z) as

$$\begin{aligned} F(z)=F_{-2}z^{-2}+F_{-1}z^{-1}+F_{0}+F_{1}z+F_{2}z^{2}+F_{3}z^{3}+F_{4}z^{4}+F_{5}z^{5}, \end{aligned}$$

where

$$\begin{aligned} F_{-2}&=10^{-4} \left[ \begin{array}{cc} -0.4660 & -0.6116\\ 0.5149 & 0.7537 \end{array}\right] ,\\ F_{-1}&=10^{-3} \left[ \begin{array}{cc} 0.5936 & 0.1305\\ -0.1099 & 0.6629 \end{array}\right] ,\\ F_0&= \left[ \begin{array}{cc} -0.0401 & -0.0525\\ 0.0442 & 0.0646 \end{array}\right] , \end{aligned}$$
$$\begin{aligned} F_1= \left[ \begin{array}{cc} 0.4638 & 0.0525\\ -0.0442 & 0.6419 \end{array}\right] , \quad F_2= \left[ \begin{array}{cc} 0.4638 & 0.0525\\ -0.0442 & 0.6419 \end{array}\right] , \quad F_3= \left[ \begin{array}{cc} -0.0401 & -0.0525\\ 0.0442 & 0.0646 \end{array}\right] , \end{aligned}$$
$$\begin{aligned} F_4=10^{-3} \left[ \begin{array}{cc} 0.5936 & 0.1305\\ -0.1099 & 0.6629 \end{array}\right] , \quad F_5=10^{-4} \left[ \begin{array}{cc} -0.4660 & -0.6116\\ 0.5149 & 0.7537 \end{array}\right] . \end{aligned}$$

Here \(F_5=F_{-2}\), \(F_4=F_{-1}\), \(F_3=F_{0}\) and \(F_2=F_{1}\); that is, the properties of the multiscaling coefficients \(H_k\) which ensure the symmetry of \(\varPhi (x)\) are preserved, and hence the dual multiscaling function \(\tilde{\varPhi }(x)\) is also symmetric. Since the entries of the matrix coefficients \(F_{-2}\), \(F_{-1}\), \(F_4\) and \(F_5\) are very small, the support of \(\tilde{\varPhi }(x)\) is nearly the same as that of \(\varPhi (x)\). The components of the dual multiscaling function \(\tilde{\varPhi }(x)\) are shown in Fig. 2.

Fig. 2 The two components \(\tilde{\phi }_0\) and \(\tilde{\phi }_1\) of the dual multiscaling function \(\tilde{\varPhi }\). Both components are symmetric and compactly supported

Thus we have obtained a symmetric and compactly supported dual multiscaling function \(\tilde{\varPhi }(x)\) so that the functions \(\varPhi (x)\) and \(\tilde{\varPhi }(x)\) form a pair of pseudo-biorthogonal multiscaling functions.
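The coefficient symmetry \(F_k=F_{3-k}\) used above can be verified mechanically. A small sketch, reading the printed scale prefixes as scientific-notation factors \(10^{-3}\), \(10^{-4}\) (an assumption about the output format):

```python
import numpy as np

# dual symbol coefficients as printed, with indices k = -2, ..., 5;
# the 1e-3 / 1e-4 prefixes are read as scientific-notation scale factors
F = {
    -2: 1e-4 * np.array([[-0.4660, -0.6116], [0.5149, 0.7537]]),
    -1: 1e-3 * np.array([[0.5936, 0.1305], [-0.1099, 0.6629]]),
     0: np.array([[-0.0401, -0.0525], [0.0442, 0.0646]]),
     1: np.array([[0.4638, 0.0525], [-0.0442, 0.6419]]),
     2: np.array([[0.4638, 0.0525], [-0.0442, 0.6419]]),
     3: np.array([[-0.0401, -0.0525], [0.0442, 0.0646]]),
     4: 1e-3 * np.array([[0.5936, 0.1305], [-0.1099, 0.6629]]),
     5: 1e-4 * np.array([[-0.4660, -0.6116], [0.5149, 0.7537]]),
}

# F_k = F_{3-k}: the coefficients are palindromic about 3/2, the same pattern
# in the H_k that makes Phi symmetric, so the dual is symmetric as well
assert all(np.allclose(F[k], F[3 - k]) for k in F)
```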

Results

1. We defined a symmetric matrix polynomial analogous to symmetric scalar polynomials (Definition 3.1).

2. If the symbol \(H(\xi )\) is a symmetric matrix polynomial of degree l, then the corresponding multiscaling function \(\varPhi\) is symmetric about the point \(\frac{l}{2}\) (Lemma 3.1).

3. The eigenvalues of a symmetric matrix polynomial \(L(\lambda )\) occur in reciprocal pairs with the same eigenvectors, i.e. if \(\lambda _0\) is an eigenvalue of \(L(\lambda )\) with eigenvector \(x_0\), then \(\frac{1}{\lambda _0}\) is also an eigenvalue of \(L(\lambda )\) with the same eigenvector \(x_0\) (Lemma 3.2).

4. Let \((X, T)\) be a Jordan pair where X is of size \(n\times nl\) and T is a diagonal matrix of order nl, with \(n\in \mathbb {N}\), \(n\ge 2\) and l even. Assume that all diagonal entries of T are nonzero, none of them equals 1, and the eigenvalues of T occur in reciprocal pairs with the same eigenvectors in X. Then a matrix polynomial with Jordan pair \((X, T)\) is symmetric (Theorem 3.1).

5. In the above result, if l is odd then the matrix polynomial is still symmetric, provided that −1 is an eigenvalue of T (a diagonal entry of T) with algebraic multiplicity n (Theorem 3.2).
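Result 3 is easy to probe numerically. A minimal sketch, assuming as in Definition 3.1 that "symmetric" means palindromic coefficients \(A_k=A_{l-k}\), the matrix analogue of a symmetric scalar polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)
A0, A1 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
A = [A0, A1, A0]   # palindromic coefficients A_k = A_{l-k}, here l = 2

# det L(lambda) via polynomial arithmetic on the four entries of L;
# numpy polynomial routines expect highest-degree coefficients first
p = {(i, j): np.array([A[2][i, j], A[1][i, j], A[0][i, j]])
     for i in (0, 1) for j in (0, 1)}
det = np.polysub(np.polymul(p[0, 0], p[1, 1]), np.polymul(p[0, 1], p[1, 0]))

# a palindromic L gives a palindromic det, so its roots (the eigenvalues
# of L) come in reciprocal pairs lambda_0, 1/lambda_0
assert np.allclose(det, det[::-1])
roots = np.roots(det)
for r in roots:
    assert np.min(np.abs(roots - 1.0 / r)) < 1e-6 * max(1.0, abs(1.0 / r))
```

The pairing follows since \(\lambda^{nl}\det L(1/\lambda)=\det (\lambda^{l}L(1/\lambda))=\det L(\lambda)\) for palindromic coefficients, so the determinant itself is a symmetric scalar polynomial.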

Conclusions

We have found necessary as well as sufficient conditions on a Jordan pair \((X, T)\) such that the corresponding matrix polynomial \(H(\xi )\) is symmetric. We selected a Jordan pair satisfying these conditions and constructed a symmetric matrix polynomial \(H(\xi )\). Using the cascade algorithm, we found the multiscaling function \(\varPhi\) for which \(H(\xi )\) acts as the symbol function; since \(H(\xi )\) is a symmetric matrix polynomial, \(\varPhi\) is also symmetric. Finally, we constructed the dual multiscaling function \(\tilde{\varPhi }(x)\), which is symmetric and compactly supported, so that the functions \(\varPhi (x)\) and \(\tilde{\varPhi }(x)\) form a pair of pseudo-biorthogonal multiscaling functions.