1 Introduction

Let \(H^2\) be the classical Hardy space. The space \(H^2\) consists of all functions \({f(z)=\sum _{k=0}^{\infty }a_kz^k}\) analytic in the unit disk \(\mathbb {D}=\{z\in \mathbb {C}:|z|<1\}\) and such that \({\sum _{k=0}^{\infty }|a_k|^2<\infty }\). It can also be identified (via radial limits) with the closed linear span of analytic polynomials in \(L^2=L^2(\mathbb {T})\), \(\mathbb {T}=\partial \mathbb {D}\). Let P be the orthogonal projection from \(L^2\) onto \(H^2\).

For \(\varphi \in L^2\), the Toeplitz operator \(T_{\varphi }\) is defined on the set of all bounded analytic functions \(H^{\infty }\subset H^2\) by

$$\begin{aligned} T_{\varphi }f=P(\varphi f),\quad f\in H^{\infty }. \end{aligned}$$

Since \(H^{\infty }\) is a dense subset of the Hardy space, the operator \(T_{\varphi }\) is densely defined. Moreover, it can be extended to a bounded linear operator \(T_{\varphi }:H^2\rightarrow H^2\) if and only if \(\varphi \in L^{\infty }=L^{\infty }(\mathbb {T})\). Toeplitz operators have many applications and are well-studied (for more details and properties see for example [2, 19]). Two examples of these operators are the unilateral shift \(S=T_z\) and the backward shift \(S^{*}=T_{\overline{z}}\).

The Toeplitz operator \(T_{\varphi }\) can be seen as a compression to \(H^2\) of the multiplication operator \(f\mapsto \varphi f\) defined on \(L^2\). Recently, compressions of multiplication operators to model spaces have been intensely studied. A model space is a closed subspace of \(H^2\) of the form \(K_{\alpha }=H^2\ominus \alpha H^2\), where \(\alpha \) is an inner function (\(\alpha \in H^{\infty }\) and \(|\alpha |=1\) a.e. on \(\mathbb {T}\)). By Beurling's theorem, model spaces are precisely the nontrivial closed \(S^{*}\)-invariant subspaces of \(H^2\). For each \(w\in \mathbb {D}\) the point evaluation functional \(f\mapsto f(w)\) is bounded on \(K_{\alpha }\), so there exists a kernel function \(k_{w}^{\alpha }\in K_{\alpha }\) with the reproducing property \(f(w)=\langle f, k_{w}^{\alpha }\rangle \) for every \(f\in K_{\alpha }\). We have

$$\begin{aligned} k_{w}^{\alpha }(z)=\frac{1-\overline{\alpha (w)}\alpha (z)}{1-\overline{w}z},\quad z\in \mathbb {D}. \end{aligned}$$
(1.1)

Moreover, \( K_{\alpha }\) is preserved by the conjugation \(C_{\alpha }:L^2\rightarrow L^2\) (an antilinear, isometric involution) defined on \(L^2\) by the formula

$$\begin{aligned} C_{\alpha }f(z)=\alpha (z)\overline{z}\overline{f(z)},\quad |z|=1. \end{aligned}$$

Note that the conjugate kernel function \({\widetilde{k}}_{w}^{\alpha }=C_{\alpha }{k}_{w}^{\alpha }\), \(w\in \mathbb {D}\), is given by

$$\begin{aligned} {\widetilde{k}}_{w}^{\alpha }(z)=\frac{\alpha (z)-\alpha (w)}{z-w},\quad z\in \mathbb {D}. \end{aligned}$$

A thorough account of the subject of model spaces can be found in [11].
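The formulas above are easy to check numerically for a concrete finite Blaschke product. The sketch below (with illustrative zeros, test point and grid size, none of which come from the paper) verifies the reproducing property of \(k_{w}^{\alpha }\) from (1.1) and the identity \(C_{\alpha }k_{w}^{\alpha }={\widetilde{k}}_{w}^{\alpha }\) on \(\mathbb {T}\).

```python
import numpy as np

# Numerical sanity check of (1.1): alpha is an illustrative Blaschke product
# with two zeros; the inner product on L^2(T) is approximated by the
# trapezoid rule on N roots of unity.
a1, a2 = 0.3 + 0.2j, -0.5j                      # zeros of alpha, inside D

def alpha(z):
    return (z - a1) / (1 - np.conj(a1) * z) * (z - a2) / (1 - np.conj(a2) * z)

def k(w, z):                                    # reproducing kernel (1.1)
    return (1 - np.conj(alpha(w)) * alpha(z)) / (1 - np.conj(w) * z)

N = 4096
zs = np.exp(2j * np.pi * np.arange(N) / N)      # quadrature nodes on T
ip = lambda F, G: np.mean(F * np.conj(G))       # <f, g> in L^2(T), approximately

f = 1 / (1 - np.conj(a1) * zs)                  # Cauchy kernel at a zero, lies in K_alpha
w = 0.4 - 0.1j
repro_err = abs(ip(f, k(w, zs)) - 1 / (1 - np.conj(a1) * w))   # f(w) = <f, k_w^alpha>

conj_kernel = alpha(zs) * np.conj(zs) * np.conj(k(w, zs))      # C_alpha k_w on T
tilde_err = np.max(np.abs(conj_kernel - (alpha(zs) - alpha(w)) / (zs - w)))
```

Since all functions involved extend analytically across \(\mathbb {T}\), the trapezoid rule converges geometrically and both errors are at machine-precision level.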

Let \(\alpha \) be an inner function and let \(P_{\alpha }\) denote the orthogonal projection from \(L^2\) onto \(K_{\alpha }\). The truncated Toeplitz operator \(A_{\varphi }^{\alpha }\), \(\varphi \in L^2\), is defined by

$$\begin{aligned} A_{\varphi }^{\alpha }f=P_{\alpha }(\varphi f),\quad f\in K_{\alpha }^{\infty }=K_{\alpha }\cap H^{\infty }. \end{aligned}$$

More generally, for two inner functions \(\alpha \) and \(\beta \), the asymmetric truncated Toeplitz operator \(A_{\varphi }^{\alpha ,\beta }\), \(\varphi \in L^2\), is defined by

$$\begin{aligned} A_{\varphi }^{\alpha ,\beta }f=P_{\beta }(\varphi f),\quad f\in K_{\alpha }^{\infty }. \end{aligned}$$

Clearly, \(A_{\varphi }^{\alpha ,\alpha }=A_{\varphi }^{\alpha }\). The operator \(A_{\varphi }^{\alpha ,\beta }\) is densely defined since \(K_{\alpha }^{\infty }\) is a dense subset of \(K_{\alpha }\). Let

$$\begin{aligned} {\mathscr {T}}(\alpha ,\beta )=\{A_{\varphi }^{\alpha ,\beta }\ :\ \varphi \in L^2\ \mathrm {and}\ A_{\varphi }^{\alpha ,\beta }\ \text {extends boundedly to }K_{\alpha }\} \end{aligned}$$

and \({\mathscr {T}}(\alpha )={\mathscr {T}}(\alpha ,\alpha )\).

Truncated Toeplitz operators were introduced in 2007 in D. Sarason’s paper [20]. Extensive study of this class of operators has revealed many interesting properties and applications. For example, a truncated Toeplitz operator is not uniquely determined by its symbol; it is \(C_{\alpha }\)-symmetric [20]; and the fact that it can be boundedly extended to \(K_{\alpha }\) does not imply that it has a symbol from \(L^{\infty }\) [1]. We refer the reader to the survey [10] for more results and references (see also [9]). A natural generalization, the class of asymmetric truncated Toeplitz operators, was introduced more recently in [3, 4] and [14].

Among the results on truncated Toeplitz operators are characterizations given by J.A. Cima, W.T. Ross and W.R. Wogen in [8]. The authors in [8] described truncated Toeplitz operators on finite-dimensional model spaces in terms of matrix representations with respect to some natural bases.

It is known that \(\text {dim}K_{\alpha }=m<\infty \) if and only if \(\alpha \) is a finite Blaschke product with m (not necessarily distinct) zeros \(a_1,\ldots , a_m\in \mathbb {D}\) [11, Chapter 5]. In that case, each \(f\in K_{\alpha }\) can be written as

$$\begin{aligned} f(z)=\frac{q(z)}{(1-{\overline{a}}_1z)\cdots (1-{\overline{a}}_mz)}, \end{aligned}$$

where q is a polynomial of degree at most \(m-1\) (and so f is analytic in an open set containing the closure of \(\mathbb {D}\) and here \(K_{\alpha }=K_{\alpha }^{\infty }\)). In particular, if \(\alpha (z)=z^m\), then \(K_{\alpha }\) is the set of all polynomials of degree at most \(m-1\).

It is also known that if \(\text {dim}K_{\alpha }=m<\infty \), then the set of kernel functions \(\{k_{w_1}^{\alpha },\ldots ,k_{w_m}^{\alpha }\}\) corresponding to any distinct points \(w_1,\ldots , w_m\in \mathbb {D}\) is linearly independent (for a proof see [20, proof of Thm. 7.1(b)]) and is thus a (non-orthogonal) basis for \(K_{\alpha }\). Since \(C_{\alpha }\) is an antilinear isometry, the set of conjugate kernel functions \(\{{\widetilde{k}}_{w_1}^{\alpha },\ldots ,{\widetilde{k}}_{w_m}^{\alpha }\}\) is also a basis for \(K_{\alpha }\). In particular, if the zeros \(a_1,\ldots , a_m\) are distinct, we have the kernel function basis \({\mathcal {K}}^{\alpha }_m=\{k_{a_1}^{\alpha },\ldots ,k_{a_m}^{\alpha }\}\) and the conjugate kernel function basis \(\widetilde{{\mathcal {K}}}^{\alpha }_m=\{{\widetilde{k}}_{a_1}^{\alpha },\ldots ,{\widetilde{k}}_{a_m}^{\alpha }\}\). Note that in that case, for each \(j\in \{1,\ldots ,m\}\),

$$\begin{aligned} {k}_{a_j}^{\alpha }(z)=\frac{1}{1-{\overline{a}}_jz}\quad \text {and}\quad {\widetilde{k}}_{a_j}^{\alpha }(z)=\frac{\alpha (z)}{z-a_j}. \end{aligned}$$
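These two formulas are related by the conjugation: on \(\mathbb {T}\) one has \(C_{\alpha }k_{a_j}^{\alpha }={\widetilde{k}}_{a_j}^{\alpha }\) pointwise, which the sketch below checks for an illustrative Blaschke product (the zeros and sample points are arbitrary choices).

```python
import numpy as np

# Pointwise check on T: for each zero a_j of alpha, C_alpha maps
# k_{a_j}^alpha(z) = 1/(1 - conj(a_j) z) to alpha(z)/(z - a_j).
# The Blaschke product below is an illustrative example.
zeros = [0.5, -0.2 + 0.4j, 0.1 - 0.6j]

def alpha(z):
    w = np.ones_like(z)
    for a in zeros:
        w = w * (z - a) / (1 - np.conj(a) * z)
    return w

z = np.exp(1j * np.linspace(0.05, 2 * np.pi, 300, endpoint=False))  # points on T
err = 0.0
for a in zeros:
    cauchy = 1 / (1 - np.conj(a) * z)                    # k_{a_j}^alpha on T
    conj_k = alpha(z) * np.conj(z) * np.conj(cauchy)     # (C_alpha k_{a_j}^alpha)(z)
    err = max(err, np.max(np.abs(conj_k - alpha(z) / (z - a))))
```

The identity is exact algebra on \(|z|=1\) (using \(\overline{z}=1/z\)), so the error is at rounding level.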

If \(\text {dim}K_{\alpha }=m<\infty \), then \(\alpha \), as a finite Blaschke product, is analytic in an open set containing the closure of \(\mathbb {D}\). For any \(\eta \in \mathbb {T}\) one can consider \({k}_{\eta }^{\alpha }\) defined by (1.1) with w replaced by \(\eta \). It turns out that this \({k}_{\eta }^{\alpha }\) also belongs to \(K_{\alpha }\) and \(f(\eta )=\langle f, k_{\eta }^{\alpha }\rangle \) for every \(f\in K_{\alpha }\). Moreover,

$$\begin{aligned} \Vert k_{\eta }^{\alpha }\Vert ^2=k_{\eta }^{\alpha }(\eta )=\eta \overline{\alpha (\eta )}\alpha '(\eta )=|\alpha '(\eta )|>0 \end{aligned}$$

and

$$\begin{aligned} {\widetilde{k}}_{\eta }^{\alpha }=C_{\alpha }k_{\eta }^{\alpha }=\overline{\eta }\alpha (\eta )k_{\eta }^{\alpha } \end{aligned}$$

(see [11, Chapter 7] for details). Now for a fixed \(\lambda \in \mathbb {T}\) define

$$\begin{aligned} \alpha _{\lambda }=\frac{\lambda +\alpha (0)}{1+\overline{\alpha (0)}\lambda }. \end{aligned}$$

Then \(\alpha _{\lambda }\in \mathbb {T}\) and the equation \(\alpha (\eta )=\alpha _{\lambda }\) has precisely m distinct solutions \(\eta _1,\ldots , \eta _m\), each from \(\mathbb {T}\) ([12, p. 6]). The corresponding kernel functions \(\{k_{\eta _1}^{\alpha },\ldots ,k_{\eta _m}^{\alpha }\}\) are pairwise orthogonal:

$$\begin{aligned} \langle k_{\eta _i}^{\alpha },k_{\eta _j}^{\alpha } \rangle =k_{\eta _i}^{\alpha }(\eta _j)=\left\{ \begin{array}{c@{\quad }l} \Vert k_{\eta _i}^{\alpha }\Vert ^2>0&{}\text {for }i=j,\\ \frac{1-|\alpha _{\lambda }|^2}{1-\overline{\eta }_i\eta _j}=0&{}\text {for }i\ne j.\end{array}\right. \end{aligned}$$

Hence, the set of normalized kernels

$$\begin{aligned} v_{\eta _j}^{\alpha }=\tfrac{1}{\Vert k_{\eta _j}^{\alpha }\Vert }k_{\eta _j}^{\alpha }=\tfrac{1}{\sqrt{|\alpha '(\eta _j)|}}k_{\eta _j}^{\alpha },\quad j=1,\ldots , m, \end{aligned}$$

is an orthonormal basis for \(K_{\alpha }\). The functions \(v_{\eta _j}^{\alpha }\) can also be obtained as eigenvectors of the Clark operator \(U^{\alpha }_{\lambda }\), the one-dimensional unitary perturbation of the compressed shift \(S_{\alpha }=A_{z}^{\alpha }\) given by

$$\begin{aligned} U^{\alpha }_{\lambda }=S_{\alpha }+\tfrac{\lambda +\alpha (0)}{1-|\alpha (0)|^2}(k_0^{\alpha }\otimes {\widetilde{k}}_0^{\alpha }) \end{aligned}$$

(see [7]). That is why the basis \({\mathcal {V}}_m^{\alpha }=\{v_{\eta _1}^{\alpha },\ldots ,v_{\eta _m}^{\alpha }\}\) is called the Clark basis (see also [11, Chapter 11]).
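For a concrete finite Blaschke product the Clark points and the orthonormality of \({\mathcal {V}}_m^{\alpha }\) can be verified numerically. In the sketch below (illustrative \(\alpha \), \(\lambda \) and grid size), the equation \(\alpha (\eta )=\alpha _{\lambda }\) is reduced to a polynomial equation and the Gram matrix of the normalized boundary kernels is computed by quadrature on \(\mathbb {T}\).

```python
import numpy as np

# Clark points for alpha(eta) = alpha_lambda via polynomial root finding, and
# a quadrature check that the normalized kernels v_eta_j form an orthonormal
# set (the unit diagonal also confirms ||k_eta||^2 = |alpha'(eta)|).
# alpha, lambda and N are illustrative choices.
zeros = np.array([0.4, -0.3 + 0.3j, 0.2j])
lam = np.exp(0.7j)

def alpha(z):
    z = np.asarray(z, dtype=complex)
    w = np.ones(z.shape, dtype=complex)
    for a in zeros:
        w = w * (z - a) / (1 - np.conj(a) * z)
    return w

def dalpha(z):                                   # alpha' via the logarithmic derivative
    return alpha(z) * sum(1 / (z - a) + np.conj(a) / (1 - np.conj(a) * z)
                          for a in zeros)

a0 = complex(alpha(0.0))
c = (lam + a0) / (1 + np.conj(a0) * lam)         # alpha_lambda, lies on T
num = np.poly(zeros)                             # prod (z - a_i)
den = np.prod(-np.conj(zeros)) * np.poly(1 / np.conj(zeros))  # prod (1 - conj(a_i) z)
eta = np.roots(num - c * den)                    # solutions of alpha(eta) = alpha_lambda

unimodular_err = np.max(np.abs(np.abs(eta) - 1))

N = 8192
zs = np.exp(2j * np.pi * (np.arange(N) + 0.5) / N)    # nodes avoiding the eta_j
V = np.array([(1 - np.conj(complex(alpha(e))) * alpha(zs)) / (1 - np.conj(e) * zs)
              / np.sqrt(abs(complex(dalpha(e)))) for e in eta])
gram_err = np.max(np.abs(V @ V.conj().T / N - np.eye(len(eta))))
```

The root-finding step uses that \(\alpha (z)=c\) is equivalent to \(\prod (z-a_i)=c\prod (1-\overline{a}_iz)\) away from the poles of \(\alpha \).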

It is well known that a bounded linear operator \(T:H^2\rightarrow H^2\) is a Toeplitz operator if and only if its matrix representation with respect to the standard basis is a Toeplitz matrix, that is, an infinite matrix with constant diagonals. It follows that truncated Toeplitz operators on \(K_{\alpha }\), \(\alpha (z)=z^m\), can be described as those linear operators on \(K_{\alpha }\) which are represented by finite Toeplitz matrices with respect to the monomial basis. In [8], the authors characterized the operators from \({\mathscr {T}}(\alpha )\) (for \(\alpha \) such that \(\text {dim}K_{\alpha }=m<\infty \)) using matrix representations of these operators with respect to the kernel basis \({{\mathcal {K}}}^{\alpha }_m\), the conjugate kernel basis \({\widetilde{{\mathcal {K}}}}^{\alpha }_m\), and the Clark basis \({\mathcal {V}}^{\alpha }_m\). In each of these cases the matrix representing a truncated Toeplitz operator turns out to be determined by the entries of its first (or any other fixed) row and of the main diagonal. For some infinite-dimensional cases, these results were generalized in [17].
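For \(\alpha (z)=z^m\) the finite Toeplitz-matrix description is easy to observe numerically: the matrix of \(A_{\varphi }^{\alpha }\) with respect to the monomial basis \(\{1,z,\ldots ,z^{m-1}\}\) has entries \(\langle \varphi z^p,z^s\rangle =\widehat{\varphi }(s-p)\), which depend only on \(s-p\). The sketch below checks this for an illustrative \(m\) and symbol.

```python
import numpy as np

# For alpha(z) = z^m, the matrix of A_phi^alpha w.r.t. {1, z, ..., z^{m-1}}
# has entries <phi z^p, z^s> = hat{phi}(s - p), hence constant diagonals.
# phi is an illustrative trigonometric polynomial.
m, N = 4, 64
rng = np.random.default_rng(2)
coeffs = {k: rng.standard_normal() + 1j * rng.standard_normal() for k in range(-3, 4)}
zs = np.exp(2j * np.pi * np.arange(N) / N)
phi = sum(c * zs**k for k, c in coeffs.items())

M = np.array([[np.mean(phi * zs**p * zs**(-s))   # <phi z^p, z^s> = hat{phi}(s - p)
               for p in range(m)] for s in range(m)])
toeplitz_err = max(abs(M[s, p] - coeffs[s - p]) for s in range(m) for p in range(m))
```

Here the projection \(P_{\alpha }\) is implicit: taking the inner product against \(z^s\), \(0\le s\le m-1\), already picks out the \(K_{\alpha }\)-component of \(\varphi z^p\).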

In [16], the authors generalized the characterizations from [8] to the case of asymmetric truncated Toeplitz operators acting between finite-dimensional model spaces. For example, it was proved in [16] that if \(\text {dim}K_{\alpha }=m<\infty \), \(\text {dim}K_{\beta }=n<\infty \) and the Blaschke products \(\alpha \), \(\beta \) have l common zeros, then a matrix representing an operator \(A\in {\mathscr {T}}(\alpha ,\beta )\) (with respect to \({\mathcal {K}}^{\alpha }_m\) and \({\mathcal {K}}^{\beta }_n\)) is determined by its entries along the first row, the first l entries along the main diagonal and the last \(n-l\) entries along the first column (the first row and the first column if \(l=0\)). In this paper, we continue the study of matrix representations of operators from \({\mathscr {T}}(\alpha ,\beta )\) (for finite Blaschke products \(\alpha \) and \(\beta \)) and present some new characterizations. The novelty of our approach is that here we consider matrix representations computed with respect to two different types of bases (for example, the kernel basis in \(K_{\alpha }\) and the conjugate kernel basis in \(K_{\beta }\)). The characterizations thus obtained turn out to be simpler than the ones from [16], as they do not depend on whether or not \(\alpha \) and \(\beta \) have common zeros. Moreover, these results are new even in the case of truncated Toeplitz operators (\(\alpha =\beta \)).

In Sect. 2, we assume that \(\alpha \) and \(\beta \) are two finite Blaschke products, each with distinct zeros. We then characterize operators from \({\mathscr {T}}(\alpha ,\beta )\) using matrix representations computed with respect to the kernel basis in \(K_{\alpha }\) and the conjugate kernel basis in \(K_{\beta }\) (and vice versa). In Sect. 3, we consider matrix representations computed with respect to the kernel (or conjugate kernel) basis in one space (assuming that the zeros of the corresponding finite Blaschke product are distinct) and Clark basis in the other (with no additional assumptions on the zeros of the corresponding finite Blaschke product).

2 Kernel Basis and Conjugate Kernel Basis

In what follows let \(\alpha \) and \(\beta \) be two finite Blaschke products with zeros \(a_1,\ldots , a_m\) and \(b_1,\ldots , b_n\), respectively. In this section we assume that the zeros \(a_1,\ldots , a_m\) are distinct and that the zeros \(b_1,\ldots , b_n\) are also distinct.

We first consider matrix representations with respect to \({\mathcal {K}}^{\alpha }_m=\{k_{a_1}^{\alpha },\ldots ,k_{a_m}^{\alpha }\}\) and \(\widetilde{{\mathcal {K}}}^{\beta }_n=\{{\widetilde{k}}_{b_1}^{\beta },\ldots ,{\widetilde{k}}_{b_n}^{\beta }\}\).

Our reasoning is based on a duality relation between the kernels \(\{{k}_{b_1}^{\beta },\ldots ,{k}_{b_n}^{\beta }\}\) and the conjugate kernels \(\{{\widetilde{k}}_{b_1}^{\beta },\ldots ,{\widetilde{k}}_{b_n}^{\beta }\}\). Recall that for each \(s\in \{1,\ldots , n\}\) we have

$$\begin{aligned} \langle {\widetilde{k}}_{b_j}^{\beta }, {k}_{b_s}^{\beta }\rangle ={\widetilde{k}}_{b_j}^{\beta }(b_s)=\left\{ \begin{array}{l@{\quad }l} {\beta '(b_s)}&{}\mathrm {for}\ j=s,\\ 0&{}\mathrm {for}\ j\ne s.\end{array}\right. \end{aligned}$$
(2.1)

In other words, one can say that the set \(\left\{ \frac{1}{\overline{\beta '(b_1)}}{k}_{b_1}^{\beta },\ldots ,\frac{1}{\overline{\beta '(b_n)}}{k}_{b_n}^{\beta }\right\} \) is biorthogonal to \({\widetilde{{\mathcal {K}}}}^{\beta }_n\). Therefore, if A is a linear transformation from \(K_{\alpha }\) into \(K_{\beta }\), then the element \(r_{s,p}\) of its matrix representation \(M_A=(r_{s,p})\) with respect to \({\mathcal {K}}^{\alpha }_m\) and \(\widetilde{{\mathcal {K}}}^{\beta }_n\) can be obtained by

$$\begin{aligned} r_{s,p}= \tfrac{1}{\beta '(b_s)}\langle Ak_{a_p}^{\alpha }, {k}_{b_s}^{\beta }\rangle , \end{aligned}$$

where \(1\le p\le m\) and \(1\le s\le n\). Indeed, since \(Ak_{a_p}^{\alpha }=\sum \limits _{j=1}^nr_{j,p}{\widetilde{k}}_{b_j}^{\beta }\), the above equality follows from (2.1).
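This recipe for the entries can be tested numerically. In the sketch below (illustrative \(\beta \) and random coefficients), functions are expanded in the conjugate kernels of \(K_{\beta }\) with known coefficients, which are then recovered from inner products against the kernels \(k_{b_s}^{\beta }\), exactly as in (2.1) and the formula above.

```python
import numpy as np

# Quadrature check of (2.1) and of r_{s,p} = <A k_{a_p}, k_{b_s}> / beta'(b_s):
# expand functions in the conjugate kernel basis of K_beta with known random
# coefficients and recover them.  beta and the grid size are illustrative.
b = np.array([0.5, -0.3 + 0.2j, 0.1 - 0.4j])         # distinct zeros of beta
n, m = len(b), 4                                      # m plays the role of dim K_alpha

def beta(z):
    w = np.ones_like(z)
    for bj in b:
        w = w * (z - bj) / (1 - np.conj(bj) * z)
    return w

# beta'(b_s) = (1 - |b_s|^2)^{-1} * prod_{i != s} (b_s - b_i)/(1 - conj(b_i) b_s)
dbeta = np.array([np.prod([(b[s] - b[i]) / (1 - np.conj(b[i]) * b[s])
                           for i in range(n) if i != s]) / (1 - abs(b[s])**2)
                  for s in range(n)])

N = 4096
zs = np.exp(2j * np.pi * np.arange(N) / N)
ktil = np.array([beta(zs) / (zs - bj) for bj in b])        # conjugate kernels on T
ker = np.array([1 / (1 - np.conj(bj) * zs) for bj in b])   # kernels on T

biorth_err = np.max(np.abs(ktil @ ker.conj().T / N - np.diag(dbeta)))  # (2.1)

rng = np.random.default_rng(1)
R = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))  # r_{s,p}
images = R.T @ ktil                    # row p holds A k_{a_p} = sum_s r_{s,p} ktil_s
recovered = (images @ ker.conj().T / N) / dbeta
entry_err = np.max(np.abs(recovered - R.T))
```

The closed-form value of \(\beta '(b_s)\) comes from differentiating the Blaschke product at one of its (simple) zeros.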

As in the proof of [16, Thm. 3.1], for \(A\in {\mathscr {T}}(\alpha ,\beta )\) we will write \(Ak_{a_p}^{\alpha }\) as a linear combination of \(k_{a_p}^{\beta }\) and \(\{{\widetilde{k}}_{b_1}^{\beta },\ldots ,{\widetilde{k}}_{b_n}^{\beta }\}\). It will then follow from (2.1) that in this case \(r_{s,p}\) (for fixed s and p) is essentially determined by two coefficients of that linear combination, namely the ones corresponding to \(k_{a_p}^{\beta }\) and \({\widetilde{k}}_{b_s}^{\beta }\) (in contrast to the proof of [16, Thm. 3.1], where more of the coefficients were important). This will lead to simpler calculations.

Theorem 2.1

Let \(\alpha \) be a finite Blaschke product with \(m>0\) distinct zeros \(a_1,\ldots , a_m\), let \(\beta \) be a finite Blaschke product with \(n>0\) distinct zeros \(b_1,\ldots , b_n\) and let A be any linear transformation from \(K_{\alpha }\) into \(K_{\beta }\). If \(M_A=(r_{s,p})\) is the matrix representation of A with respect to the bases \({\mathcal {K}}^{\alpha }_m=\{k_{a_1}^{\alpha },\ldots , k_{a_m}^{\alpha } \}\) and \(\widetilde{{\mathcal {K}}}^{\beta }_n=\{{\widetilde{k}}_{b_1}^{\beta },\ldots , {\widetilde{k}}_{b_n}^{\beta } \}\), then \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if

$$\begin{aligned} r_{s,p}=\frac{{\beta '(b_s)}(1-{\overline{a}}_1{b}_s)r_{s,1}+{\beta '(b_1)}({\overline{a}}_1{b}_1-1)r_{1,1}+{\beta '(b_1)}(1-{\overline{a}}_p{b}_1)r_{1,p}}{{\beta '(b_s)}(1-{\overline{a}}_p{b}_s)} \end{aligned}$$
(2.2)

for all \( 1 \le p\le m\) and \(1\le s\le n\).

Proof

The proof is similar to the proofs of [8, Thm. 1.4] and [16, Thm. 3.1].

Assume first that \(A\in {\mathscr {T}}(\alpha ,\beta )\) and let

$$\begin{aligned} A=A_{\overline{\chi }+\psi }^{\alpha , \beta }\text { with }\chi \in K_{\alpha },\ \psi \in K_{\beta } \end{aligned}$$

(this is possible by [15, Cor. 2.6]). Moreover, write

$$\begin{aligned} \chi (z)=\sum _{i=1}^{m}c_i{\widetilde{k}}_{a_i}^{\alpha }(z)=\sum _{i=1}^{m}c_i\frac{\alpha (z)}{z-a_i}\quad \text {and}\quad \psi (z)=\sum _{j=1}^{n}d_j{\widetilde{k}}_{b_j}^{\beta }(z)=\sum _{j=1}^{n}d_j\frac{\beta (z)}{z-b_j}. \end{aligned}$$

Then

$$\begin{aligned} A=A_{\overline{\chi }+\psi }^{\alpha , \beta }=\sum _{i=1}^{m}\overline{c}_iA_{\frac{\overline{\alpha (z)}}{\overline{z}-{\overline{a}}_i}}^{\alpha , \beta }+\sum _{j=1}^{n}d_jA_{\frac{\beta (z)}{z-b_j}}^{\alpha , \beta } \end{aligned}$$

and, since by [15, Prop. 3.1(a)],

$$\begin{aligned} A_{\frac{\overline{\alpha (z)}}{\overline{z}-{\overline{a}}_i}}^{\alpha , \beta }={k}_{a_i}^{\beta }\otimes {\widetilde{k}}_{a_i}^{\alpha }\quad \mathrm {and}\quad A_{\frac{\beta (z)}{z-b_j}}^{\alpha , \beta }={\widetilde{k}}_{b_j}^{\beta }\otimes {k}_{b_j}^{\alpha } \end{aligned}$$

(here \(f\otimes g\) denotes the standard rank-one operator given by \(f\otimes g(h)=\langle h,g\rangle f\)), we get

$$\begin{aligned} {\begin{matrix} A{k}_{a_p}^{\alpha }&{}=\sum _{i=1}^{m}\overline{c}_i\langle {k}_{a_p}^{\alpha },{\widetilde{k}}_{a_i}^{\alpha }\rangle {k}_{a_i}^{\beta }+\sum _{j=1}^{n}d_j\langle {k}_{a_p}^{\alpha },{k}_{b_j}^{\alpha }\rangle {\widetilde{k}}_{b_j}^{\beta }\\ &{}=\overline{c}_p\overline{\alpha '(a_p)}{k}_{a_p}^{\beta }+\sum _{j=1}^{n}\frac{d_j}{1-{\overline{a}}_pb_j}{\widetilde{k}}^{\beta }_{b_j} \end{matrix}} \end{aligned}$$

for each \(1\le p\le m\). Therefore, for each \(1\le p\le m\) and \(1\le s\le n\) we have

$$\begin{aligned} {\begin{matrix} r_{s,p}&{}= \tfrac{1}{\beta '(b_s)}\langle Ak_{a_p}^{\alpha }, {k}_{b_s}^{\beta }\rangle = \tfrac{1}{\beta '(b_s)}\left\langle \overline{c}_p\overline{\alpha '(a_p)}{k}_{a_p}^{\beta }+\sum _{j=1}^{n}\tfrac{d_j}{1-{\overline{a}}_pb_j}{\widetilde{k}}^{\beta }_{b_j},{k}_{b_s}^{\beta }\right\rangle \\ &{}=\tfrac{1}{\beta '(b_s)}\overline{c}_p\overline{\alpha '(a_p)}{k}_{a_p}^{\beta }(b_s)+\tfrac{1}{\beta '(b_s)}\sum _{j=1}^{n}\tfrac{d_j}{1-{\overline{a}}_pb_j}{\widetilde{k}}^{\beta }_{b_j}(b_s)\\ {} &{}=\tfrac{1}{\beta '(b_s)}\overline{c}_p\overline{\alpha '(a_p)}\tfrac{1}{1-{\overline{a}}_p{b}_s} +\tfrac{d_s}{1-{\overline{a}}_p{b}_s}=\tfrac{d_s\beta '(b_s)+\overline{c}_p\overline{\alpha '(a_p)}}{\beta '(b_s)(1-{\overline{a}}_p{b}_s)}. \end{matrix}} \end{aligned}$$

It is now easy to verify that (2.2) holds.

To complete the proof recall that \({\mathscr {T}}(\alpha ,\beta )\), and hence the linear space \(V_0\) of all \(n\times m\) matrices representing operators from \({\mathscr {T}}(\alpha ,\beta )\) (with respect to the bases in question), has dimension \(m+n-1\) [16, Prop. 2.1]. Since \(V_0\subset V\), where V is the linear space of all \(n\times m\) matrices satisfying (2.2), and the dimension of V is clearly also \(m+n-1\), we have \(V=V_0\). \(\square \)
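The verification of (2.2) at the end of the computation above is purely algebraic and can be confirmed numerically. In the sketch below all data are random illustrative values; in particular, the derivative values \(\alpha '(a_p)\), \(\beta '(b_s)\) are replaced by random nonzero stand-ins, since the identity does not use any further property of them.

```python
import numpy as np

# Check that matrices of the form derived in the proof,
#   r_{s,p} = (d_s beta'(b_s) + conj(c_p alpha'(a_p)))
#             / (beta'(b_s) (1 - conj(a_p) b_s)),
# satisfy the three-term relation (2.2).  da, db are random stand-ins for the
# derivative values; zeros and coefficients are random illustrative data.
rng = np.random.default_rng(7)
m, n = 4, 5
a = 0.8 * (rng.random(m) - 0.5 + 1j * (rng.random(m) - 0.5))
b = 0.8 * (rng.random(n) - 0.5 + 1j * (rng.random(n) - 0.5))
da = rng.standard_normal(m) + 1j * rng.standard_normal(m)
db = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c = rng.standard_normal(m) + 1j * rng.standard_normal(m)
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)

S, P = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
r = (d[S] * db[S] + np.conj(c[P] * da[P])) / (db[S] * (1 - np.conj(a[P]) * b[S]))

rhs = (db[S] * (1 - np.conj(a[0]) * b[S]) * r[S, 0]
       + db[0] * (np.conj(a[0]) * b[0] - 1) * r[0, 0]
       + db[0] * (1 - np.conj(a[P]) * b[0]) * r[0, P]) \
      / (db[S] * (1 - np.conj(a[P]) * b[S]))
identity_err = np.max(np.abs(r - rhs))
```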

Remark 2.2

  (a)

    By Theorem 2.1 a matrix representing an operator from \({\mathscr {T}}(\alpha ,\beta )\) is completely determined by \(m+n-1\) of its entries: the entries along the first row and the first column. The same reasoning can be used to show that the first row and the first column can be replaced by any other row and any other column, respectively. More precisely, for any fixed \(p_0\in \{1,\ldots ,m\}\) and \(s_0\in \{1,\ldots ,n\}\), condition (2.2) can be replaced by

    $$\begin{aligned} r_{s,p}=\frac{{\beta '(b_s)}(1-{\overline{a}}_{p_0}{b}_s)r_{s,p_0}+{\beta '(b_{s_0})}({\overline{a}}_{p_0}{b}_{s_0}-1)r_{s_0,p_0}+{\beta '(b_{s_0})}(1-{\overline{a}}_p{b}_{s_0})r_{{s_0},p}}{{\beta '(b_s)}(1-{\overline{a}}_p{b}_s)} \end{aligned}$$

    for all \( 1 \le p\le m\) and \(1\le s\le n\).

  (b)

    Note that if \(A\in {\mathscr {T}}(\alpha ,\beta )\) is represented by the matrix \({M}_A=(r_{s,p})\), then \(A=A_{\varphi }^{\alpha , \beta }\), where (by the proof of Theorem 2.1) a symbol \(\varphi \) of the form

    $$\begin{aligned} \varphi =\overline{\chi }+\psi =\overline{\sum _{i=1}^{m}c_i{\widetilde{k}}_{a_i}^{\alpha }}+\sum _{j=1}^{n}d_j{\widetilde{k}}_{b_j}^{\beta } \end{aligned}$$

    can be obtained from \(M_A\) using the following system of \(m+n-1\) equations:

    $$\begin{aligned} \left\{ \begin{array}{c@{\quad }c} r_{1,p}=\tfrac{\beta '(b_1)}{\beta '(b_1)(1-{\overline{a}}_p{b}_1)}d_1+\tfrac{\overline{\alpha '(a_p)}}{\beta '(b_1)(1-{\overline{a}}_p{b}_1)}\overline{c}_p,&{}1\le p\le m,\\ r_{s,1}=\tfrac{\beta '(b_s)}{\beta '(b_s)(1-{\overline{a}}_1{b}_s)}d_s+\tfrac{\overline{\alpha '(a_1)}}{\beta '(b_s)(1-{\overline{a}}_1{b}_s)}\overline{c}_1,&{}1<s\le n. \end{array}\right. \end{aligned}$$
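Concretely, since the system has \(m+n-1\) equations for the \(m+n\) unknowns \(\overline{c}_p\), \(d_s\) (reflecting the non-uniqueness of the symbol), one can fix \(\overline{c}_1\) and solve successively. The sketch below checks this on random illustrative data, again with random stand-ins for the derivative values.

```python
import numpy as np

# Solve the system above for (conj(c_p), d_s) given the first row and column
# of M_A.  conj(c_1) is fixed first; the remaining unknowns follow one by one.
# Zeros, coefficients and derivative stand-ins da, db are illustrative.
rng = np.random.default_rng(3)
m, n = 4, 5
a = 0.7 * (rng.random(m) - 0.5 + 1j * (rng.random(m) - 0.5))
b = 0.7 * (rng.random(n) - 0.5 + 1j * (rng.random(n) - 0.5))
da = rng.standard_normal(m) + 1j * rng.standard_normal(m)
db = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c = rng.standard_normal(m) + 1j * rng.standard_normal(m)
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)

S, P = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
r = (d[S] * db[S] + np.conj(c[P] * da[P])) / (db[S] * (1 - np.conj(a[P]) * b[S]))

cbar = np.empty(m, complex)
dsol = np.empty(n, complex)
cbar[0] = np.conj(c[0])                        # fix the free parameter
dsol[0] = (r[0, 0] * db[0] * (1 - np.conj(a[0]) * b[0])
           - cbar[0] * np.conj(da[0])) / db[0]
for s in range(1, n):                          # equations for r_{s,1}
    dsol[s] = (r[s, 0] * db[s] * (1 - np.conj(a[0]) * b[s])
               - cbar[0] * np.conj(da[0])) / db[s]
for p in range(1, m):                          # equations for r_{1,p}
    cbar[p] = (r[0, p] * db[0] * (1 - np.conj(a[p]) * b[0])
               - dsol[0] * db[0]) / np.conj(da[p])

recovery_err = max(np.max(np.abs(cbar - np.conj(c))), np.max(np.abs(dsol - d)))
```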

Using Theorem 2.1 we can now describe operators from \({\mathscr {T}}(\alpha ,\beta )\) in terms of their matrix representations with respect to \(\widetilde{{\mathcal {K}}}^{\alpha }_m=\{{\widetilde{k}}_{a_1}^{\alpha },\ldots , {\widetilde{k}}_{a_m}^{\alpha } \}\) and \({{\mathcal {K}}}^{\beta }_n=\{{k}_{b_1}^{\beta },\ldots , {k}_{b_n}^{\beta } \}\).

If A is a linear transformation from \(K_{\alpha }\) into \(K_{\beta }\), then its matrix representation \({\widetilde{M}}_A=(t_{s,p})\) with respect to \(\widetilde{{\mathcal {K}}}^{\alpha }_m\) and \({{\mathcal {K}}}^{\beta }_n\) is given by

$$\begin{aligned} t_{s,p}= \tfrac{1}{\overline{\beta '(b_s)}}\langle A{\widetilde{k}}_{a_p}^{\alpha }, {\widetilde{k}}_{b_s}^{\beta }\rangle \end{aligned}$$

for all \( 1 \le p\le m\) and \(1\le s\le n\) (as before, this follows from (2.1)).

Consider \(B=C_{\beta }AC_{\alpha }\). Then

$$\begin{aligned} {t}_{s,p}&= \tfrac{1}{\overline{\beta '(b_s)}}{\langle A{\widetilde{k}}_{a_p}^{\alpha },{\widetilde{k}}_{b_s}^{\beta }\rangle }=\overline{\tfrac{1}{{\beta '(b_s)}}\langle C_{\beta }{k}_{b_s}^{\beta },AC_{\alpha }k_{a_p}^{\alpha }\rangle }\nonumber \\&=\overline{\tfrac{1}{{\beta '(b_s)}}\langle C_{\beta }AC_{\alpha }k_{a_p}^{\alpha },{k}_{b_s}^{\beta }\rangle }={\overline{r}}_{s,p}, \end{aligned}$$
(2.3)

where \(M_B=(r_{s,p})\) is the matrix representation of B with respect to the bases \({{\mathcal {K}}}^{\alpha }_m\) and \(\widetilde{{\mathcal {K}}}^{\beta }_n\). Since \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if \(B\in {\mathscr {T}}(\alpha ,\beta )\) ([14, Prop. 3.2(a), Rem. 3.3(a)]), we get the following characterization.

Theorem 2.3

Let \(\alpha \) be a finite Blaschke product with \(m>0\) distinct zeros \(a_1,\ldots , a_m\), let \(\beta \) be a finite Blaschke product with \(n>0\) distinct zeros \(b_1,\ldots , b_n\) and let A be any linear transformation from \(K_{\alpha }\) into \(K_{\beta }\). If \({\widetilde{M}}_A=(t_{s,p})\) is the matrix representation of A with respect to the bases \(\widetilde{{\mathcal {K}}}^{\alpha }_m=\{{\widetilde{k}}_{a_1}^{\alpha },\ldots , {\widetilde{k}}_{a_m}^{\alpha } \}\) and \({{\mathcal {K}}}^{\beta }_n=\{{k}_{b_1}^{\beta },\ldots , {k}_{b_n}^{\beta } \}\), then \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if

$$\begin{aligned} t_{s,p}=\frac{\overline{\beta '(b_s)}(1-{a}_1\overline{b}_s)t_{s,1}+\overline{\beta '(b_1)}({a}_1\overline{b}_1-1)t_{1,1}+\overline{\beta '(b_1)}(1-{a}_p\overline{b}_1)t_{1,p}}{\overline{\beta '(b_s)}(1-{a}_p\overline{b}_s)} \end{aligned}$$
(2.4)

for all \( 1 \le p\le m\) and \(1\le s\le n\).

Proof

As above, let \({\widetilde{M}}_A=(t_{s,p})\) be the matrix representation of A with respect to \(\widetilde{{\mathcal {K}}}^{\alpha }_m\) and \({{\mathcal {K}}}^{\beta }_n\), and let \(M_B=(r_{s,p})\) be the matrix representation of \(B=C_{\beta }AC_{\alpha }\) with respect to \({{\mathcal {K}}}^{\alpha }_m\) and \(\widetilde{{\mathcal {K}}}^{\beta }_n\). Then \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if \(B\in {\mathscr {T}}(\alpha ,\beta )\), which by Theorem 2.1 happens if and only if \(M_B\) satisfies (2.2). Using (2.3) it is now easy to see that the latter happens if and only if \({\widetilde{M}}_A\) satisfies (2.4).\(\square \)
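The passage between (2.2) and (2.4) through the entrywise conjugation (2.3) can also be confirmed numerically on random illustrative data (with random stand-ins for the derivative values, as before):

```python
import numpy as np

# A matrix r satisfying (2.2) is built from the closed form in the proof of
# Theorem 2.1; its conjugate t = conj(r) is then checked against (2.4).
# All data are random illustrative values.
rng = np.random.default_rng(5)
m, n = 3, 4
a = 0.6 * (rng.random(m) - 0.5 + 1j * (rng.random(m) - 0.5))
b = 0.6 * (rng.random(n) - 0.5 + 1j * (rng.random(n) - 0.5))
da = rng.standard_normal(m) + 1j * rng.standard_normal(m)
db = rng.standard_normal(n) + 1j * rng.standard_normal(n)
c = rng.standard_normal(m) + 1j * rng.standard_normal(m)
d = rng.standard_normal(n) + 1j * rng.standard_normal(n)

S, P = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
r = (d[S] * db[S] + np.conj(c[P] * da[P])) / (db[S] * (1 - np.conj(a[P]) * b[S]))
t = np.conj(r)                                  # matrix of A, by (2.3)

rhs = (np.conj(db[S]) * (1 - a[0] * np.conj(b[S])) * t[S, 0]
       + np.conj(db[0]) * (a[0] * np.conj(b[0]) - 1) * t[0, 0]
       + np.conj(db[0]) * (1 - a[P] * np.conj(b[0])) * t[0, P]) \
      / (np.conj(db[S]) * (1 - a[P] * np.conj(b[S])))
conj_identity_err = np.max(np.abs(t - rhs))
```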

Remark 2.4

  (a)

    In this case, a matrix representing an operator from \({\mathscr {T}}(\alpha ,\beta )\) is also completely determined by its entries along the first row and the first column. Again, the first row and column can be replaced by any other row and column: for any fixed \(p_0\in \{1,\ldots ,m\}\) and \(s_0\in \{1,\ldots ,n\}\), condition (2.4) can be replaced by

    $$\begin{aligned} t_{s,p}=\frac{\overline{\beta '(b_s)}(1-{a}_{p_0}\overline{b}_s)t_{s,{p_0}}+\overline{\beta '(b_{s_0})}({a}_{p_0}\overline{b}_{s_0}-1)t_{{s_0},{p_0}}+\overline{\beta '(b_{s_0})}(1-{a}_p\overline{b}_{s_0})t_{{s_0},p}}{\overline{\beta '(b_s)}(1-{a}_p\overline{b}_s)} \end{aligned}$$

    for all \( 1 \le p\le m\) and \(1\le s\le n\).

  (b)

    If \({\widetilde{M}}_A=(t_{s,p})\) is the matrix representation of A with respect to \(\widetilde{{\mathcal {K}}}^{\alpha }_m\) and \({{\mathcal {K}}}^{\beta }_n\), and \(B={C}_{\beta }A{C}_{\alpha }\), then \({M}_B=(\overline{t}_{s,p})\) is the matrix representation of B with respect to \({{\mathcal {K}}}^{\alpha }_m\) and \(\widetilde{{\mathcal {K}}}^{\beta }_n\) (by (2.3)). If \({\widetilde{M}}_A\) satisfies (2.4), then \({M}_B\) satisfies (2.2) and, as in Remark 2.2(b), we can use \((\overline{t}_{s,p})\) to find \(\varphi \in L^2\) such that \(B=A_{\varphi }^{\alpha ,\beta }\). Then \(\overline{\alpha \varphi }\beta \) is a symbol for A since \(A=C_{\beta }A_{\varphi }^{\alpha ,\beta }C_{\alpha }=A_{\overline{\alpha \varphi }\beta }^{\alpha ,\beta }\) (see [14, Prop. 3.2(a)]).

  (c)

    Note that characterizations given in Theorem 2.1 and Theorem 2.3 do not depend on whether \(\alpha \) and \(\beta \) have common zeros or not (in contrast to the characterizations given in [16, Thm. 3.1, Thm. 3.3]).

Remark 2.5

In [18] the authors consider matrix representations of the so-called truncated Hankel operators. These operators may be defined in several ways. The authors of [18] follow the definition proposed by C. Gu [13]: the truncated Hankel operator \(B_{\varphi }^{\alpha }\), \(\varphi \in L^2\), is defined on a dense subset of \(K_{\alpha }\) by

$$\begin{aligned} B_{\varphi }^{\alpha }f=P_{\alpha }J(I-P)(\varphi f),\quad f\in K_{\alpha }^{\infty }, \end{aligned}$$

where \(J:L^2\rightarrow L^2\), \(Jf(z)=\overline{z}f(\overline{z})\) for \(|z|=1\). It is shown in [18, Thm. 3.1] that, for a finite Blaschke product \(\alpha \) with distinct zeros \(a_1,\ldots ,a_n\), a linear transformation B on \(K_{\alpha }\) is a truncated Hankel operator if and only if its matrix representation \({M}_B=(r_{s,p})\) with respect to the kernel basis \({\mathcal {K}}^{\alpha }_n=\{k_{a_1}^{\alpha },\ldots , k_{a_n}^{\alpha }\}\) satisfies

$$\begin{aligned} r_{s,p}=\frac{\overline{\alpha '({a}_s)(1-{a}_s a_1)}r_{s,1}-\overline{\alpha '({a}_1)(1-a_1^2)}r_{1,1}+\overline{\alpha '({a}_p)(1-{a}_1 a_p)}r_{1,p}}{\overline{\alpha '({a}_s)(1-{a}_s a_p)}} \end{aligned}$$
(2.5)

for all \(1\le s,p\le n\).

Recall from [14] that B is a truncated Hankel operator if and only if \(A=J^{\#}BC_{\alpha }\) (here \(J^{\#}:L^2\rightarrow L^2\), \(J^{\#}f(z)=\overline{f(\overline{z})}\)) belongs to \({\mathscr {T}}(\alpha ,\beta )\) with \(\beta =\alpha ^{\#}=J^{\#}\alpha \). Note that if \(\alpha \) is a finite Blaschke product with distinct zeros \(a_1,\ldots ,a_n\), then \(\beta =\alpha ^{\#}\) is a finite Blaschke product with distinct zeros \(b_1,\ldots ,b_n\), where \(b_s={\overline{a}}_s\), \(s=1,\ldots ,n\). So, by Theorem 2.3, \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if its matrix representation \({\widetilde{M}}_A=(t_{s,p})\) with respect to \(\widetilde{{\mathcal {K}}}^{\alpha }_n\) and \({{\mathcal {K}}}^{\beta }_n\) satisfies

$$\begin{aligned} t_{s,p}=\frac{\overline{\beta '({\overline{a}}_s)}(1-{a}_1 a_s)t_{s,1}+\overline{\beta '({\overline{a}}_1)}({a}_1a_1-1)t_{1,1}+\overline{\beta '({\overline{a}}_1)}(1-{a}_p a_1)t_{1,p}}{\overline{\beta '({\overline{a}}_s)}(1-{a}_p a_s)} \end{aligned}$$
(2.6)

for all \( 1 \le p, s\le n\). Since here \(\beta '(b_s)=\beta '({\overline{a}}_s)=\overline{\alpha '(a_s)}\) and

$$\begin{aligned} J^{\#}{\widetilde{k}}_{b_s}^{\beta }=J^{\#}C_{\alpha ^{\#}} {k}_{{\overline{a}}_s}^{\alpha ^{\#}}=C_{\alpha }J^{\#} {k}_{{\overline{a}}_s}^{\alpha ^{\#}}=C_{\alpha }{k}_{{a}_s}^{\alpha }={\widetilde{k}}_{{a}_s}^{\alpha } \end{aligned}$$

(see [6]), we get

$$\begin{aligned} {\begin{matrix} {t}_{s,p}&{}= \tfrac{1}{\overline{\beta '(b_s)}}{\langle A{\widetilde{k}}_{a_p}^{\alpha },{\widetilde{k}}_{b_s}^{\beta }\rangle }= \tfrac{1}{\alpha '(a_s)}{\langle J^{\#}BC_{\alpha }{\widetilde{k}}_{a_p}^{\alpha },{\widetilde{k}}_{b_s}^{\beta }\rangle }\\ &{}= \tfrac{1}{\alpha '(a_s)}\overline{\langle B{k}_{a_p}^{\alpha },J^{\#}{\widetilde{k}}_{b_s}^{\beta }\rangle }=\overline{\tfrac{1}{\overline{\alpha '(a_s)}}\langle B{k}_{a_p}^{\alpha },{\widetilde{k}}_{a_s}^{\alpha }\rangle }={\overline{r}}_{s,p}. \end{matrix}} \end{aligned}$$

It follows that (2.5) is equivalent to (2.6) and [18, Thm. 3.1] can be obtained from Theorem 2.3. Similarly, [18, Thm. 3.2] can be obtained from Theorem 2.1.

3 Kernel Basis and Clark Basis, Conjugate Kernel Basis and Clark Basis

Let \(\alpha \) and \(\beta \) be two finite Blaschke products with (not necessarily distinct) zeros \(a_1,\ldots , a_m\) and \(b_1,\ldots , b_n\), respectively.

For a fixed \(\lambda _1 \in \mathbb {T}\) let \(\eta _1,\ldots ,\eta _m\in \mathbb {T}\) be the distinct solutions of the equation

$$\begin{aligned} \alpha (\eta )=\alpha _{\lambda _1}=\frac{\lambda _1+\alpha (0)}{1+\overline{\alpha (0)}\lambda _1} \end{aligned}$$
(3.1)

and let \({\mathcal {V}}_m^{\alpha }=\{v_{\eta _1}^{\alpha },\ldots ,v_{\eta _m}^{\alpha }\}\), \(v_{\eta _j}^{\alpha }=k_{\eta _j}^{\alpha }/\sqrt{|\alpha '(\eta _j)|}\) for \(j=1,\ldots , m\), be the corresponding Clark basis. Similarly, for a fixed \(\lambda _2 \in \mathbb {T}\) let \(\zeta _1,\ldots ,\zeta _n\in \mathbb {T}\) be the solutions of

$$\begin{aligned} \beta (\zeta )=\beta _{\lambda _2}=\frac{\lambda _2+\beta (0)}{1+\overline{\beta (0)}\lambda _2} \end{aligned}$$

and let \({\mathcal {V}}_n^{\beta }=\{v_{\zeta _1}^{\beta },\ldots ,v_{\zeta _n}^{\beta }\}\), where \(v_{\zeta _j}^{\beta }=k_{\zeta _j}^{\beta }/\sqrt{|\beta '(\zeta _j)|}\) for \(j=1,\ldots , n\).

3.1 \({\mathcal {V}}_m^{\alpha }\) and \({\mathcal {K}}_n^{\beta }\), \({\mathcal {V}}_m^{\alpha }\) and \(\widetilde{{\mathcal {K}}}_n^{\beta }\)

We first assume that the zeros \(b_1,\ldots , b_n\) are distinct and we consider matrix representations with respect to \({\mathcal {V}}_m^{\alpha }\) and \({\mathcal {K}}_n^{\beta }=\{k_{b_1}^{\beta },\ldots ,k_{b_n}^{\beta }\}\).

As in Sect. 2, (2.1) implies that the elements of the matrix representation \(M_A=(r_{s,p})\) of a linear transformation \(A:K_{\alpha }\rightarrow K_{\beta }\) (with respect to \({\mathcal {V}}_m^{\alpha }\) and \({\mathcal {K}}_n^{\beta }\)) are given by

$$\begin{aligned} r_{s,p}= \tfrac{1}{\overline{\beta '(b_s)}}\langle Av_{\eta _p}^{\alpha },{\widetilde{k}}_{b_s}^{\beta }\rangle . \end{aligned}$$
(3.2)

Theorem 3.1

Let \(\alpha \) be a finite Blaschke product with \(m>0\) (not necessarily distinct) zeros, let \({\mathcal {V}}_m^{\alpha }=\{v_{\eta _1}^{\alpha },\ldots ,v_{\eta _m}^{\alpha }\}\) be the Clark basis for \(K_{\alpha }\) corresponding to \(\lambda _1\in \mathbb {T}\) and assume that \(\beta \) is a finite Blaschke product with \(n>0\) distinct zeros \(b_1,\ldots ,b_n\). Moreover, let A be any linear transformation from \(K_{\alpha }\) into \(K_{\beta }\). If \(M_A=(r_{s,p})\) is the matrix representation of A with respect to the bases \({\mathcal {V}}_m^{\alpha }\) and \({\mathcal {K}}_n^{\beta }\), then \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if

$$\begin{aligned} r_{s,p}&=\frac{\sqrt{|\alpha '(\eta _1)|}}{\sqrt{|\alpha '(\eta _p)|}}\frac{\overline{\eta }_1-\overline{b}_s}{\overline{\eta }_p-\overline{b}_s}r_{s,1}+\frac{\sqrt{|\alpha '(\eta _1)|}}{\sqrt{|\alpha '(\eta _p)|}} \frac{\overline{\beta '(b_1)}}{\overline{\beta '(b_s)}}\frac{\overline{b}_1-\overline{\eta }_1}{\overline{\eta }_p-\overline{b}_s}r_{1,1}\nonumber \\&\quad +\frac{\overline{\beta '(b_1)}}{\overline{\beta '(b_s)}}\frac{\overline{\eta }_p-\overline{b}_1}{\overline{\eta }_p-\overline{b}_s}r_{1,p} \end{aligned}$$
(3.3)

for all \( 1 \le p\le m\) and \(1\le s\le n\).

Proof

Assume that \(A\in {\mathscr {T}}(\alpha ,\beta )\) and let \(M_A=(r_{s,p})\) be its matrix representation with respect to \({\mathcal {V}}_m^{\alpha }\) and \({\mathcal {K}}_n^{\beta }\). To show that \(r_{s,p}\) satisfies (3.3), we follow the reasoning used in the proof of [16, Thm. 3.4] (compare [8, Proof of Thm. 1.11]).

Pick \(m+n-1\) distinct points \(\xi _1,\ldots ,\xi _{m+n-1}\) from \(\mathbb {T}\setminus \{\eta _1,\ldots , \eta _m\}\). By [16, Cor. 2.4], the set \(\{k_{\xi _i}^{\beta }\otimes k_{\xi _i}^{\alpha }:i=1,\ldots ,m+n-1\}\) is a basis for \({\mathscr {T}}(\alpha ,\beta )\). Hence,

$$\begin{aligned} A=\sum _{i=1}^{m+n-1}c_i( k_{\xi _i}^{\beta }\otimes k_{\xi _i}^{\alpha }) \end{aligned}$$

for some \(c_1,\ldots ,c_{m+n-1}\in \mathbb {C}\). From this and (3.2) it follows that

$$\begin{aligned} {\begin{matrix} r_{s,p}&{}=\tfrac{1}{\overline{\beta '(b_s)}}\left\langle \sum _{i=1}^{m+n-1}c_i (k_{\xi _i}^{\beta }\otimes k_{\xi _i}^{\alpha })v_{\eta _p}^{\alpha },{\widetilde{k}}_{b_s}^{\beta }\right\rangle \\ &{}=\tfrac{1}{\overline{\beta '(b_s)}} \tfrac{1}{\sqrt{|\alpha '(\eta _p)|}}\left\langle \sum _{i=1}^{m+n-1}c_i \langle k_{\eta _p}^{\alpha }, k_{\xi _i}^{\alpha }\rangle k_{\xi _i}^{\beta },{\widetilde{k}}_{b_s}^{\beta }\right\rangle \\ &{}=\tfrac{1}{\overline{\beta '(b_s)}} \tfrac{1}{\sqrt{|\alpha '(\eta _p)|}}\sum _{i=1}^{m+n-1}c_i k_{\eta _p}^{\alpha }(\xi _i)\overline{{\widetilde{k}}_{b_s}^{\beta }(\xi _i)}\\ &{}=\tfrac{1}{\overline{\beta '(b_s)}} \tfrac{1}{\sqrt{|\alpha '(\eta _p)|}}\sum _{i=1}^{m+n-1}c_i\tfrac{1-\overline{\alpha (\eta _p)}\alpha (\xi _i)}{1-\overline{\eta }_p\xi _i}\,\tfrac{\overline{\beta (\xi _i)}}{\overline{\xi }_i-\overline{b}_s}. \end{matrix}} \end{aligned}$$

Since \(\alpha (\eta _p)=\alpha _{\lambda _1} \) for all \(1\le p\le m\) (see (3.1)), we have

$$\begin{aligned} r_{s,p}=\tfrac{1}{\overline{\beta '(b_s)}} \tfrac{1}{\sqrt{|\alpha '(\eta _p)|}}\sum _{i=1}^{m+n-1}d_i\overline{\xi }_i\tfrac{1}{(\overline{\eta }_p-\overline{\xi }_i)(\overline{b}_s-\overline{\xi }_i)}, \end{aligned}$$

where \(d_i=\left( 1-\overline{\alpha }_{\lambda _1}\alpha (\xi _i)\right) \overline{\beta (\xi _i)}\), \(1\le i\le m+n-1\), are constants which do not depend on p and s. The condition (3.3) can now be verified using the equality

$$\begin{aligned} \frac{\overline{\eta }_p-\overline{b}_s}{(\overline{\eta }_p-\overline{\xi }_i)(\overline{b}_s-\overline{\xi }_i)}=\frac{1}{\overline{b}_s-\overline{\xi }_i}-\frac{1}{\overline{\eta }_p-\overline{\xi }_i}. \end{aligned}$$
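This is the standard partial-fraction decomposition; as a quick aside (not part of the original text), it can be confirmed symbolically, with \(e\), \(b\), \(x\) standing for \(\overline{\eta }_p\), \(\overline{b}_s\), \(\overline{\xi }_i\):

```python
import sympy as sp

# The partial-fraction identity used above:
# (e - b) / ((e - x)(b - x)) = 1/(b - x) - 1/(e - x)
e, b, x = sp.symbols('e b x')
lhs = (e - b) / ((e - x) * (b - x))
rhs = 1 / (b - x) - 1 / (e - x)
diff = sp.simplify(lhs - rhs)
```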

Indeed, writing \(\overline{\eta }_p-\overline{b}_s=(\overline{\eta }_p-\overline{b}_1)+(\overline{b}_1-\overline{\eta }_1)+(\overline{\eta }_1-\overline{b}_s)\) and applying the above equality to each summand, for \(1\le p\le m\) and \(1\le s\le n\) we have

$$\begin{aligned} r_{s,p}=&\tfrac{1}{\overline{\beta '(b_s)}} \tfrac{1}{\sqrt{|\alpha '(\eta _p)|}}\tfrac{1}{\overline{\eta }_p-\overline{b}_s}\sum _{i=1}^{m+n-1}d_i\overline{\xi }_i\tfrac{\overline{\eta }_p-\overline{b}_s}{(\overline{\eta }_p-\overline{\xi }_i)(\overline{b}_s-\overline{\xi }_i)}\\=&\tfrac{1}{\overline{\beta '(b_s)}} \tfrac{(\overline{\eta }_p-\overline{b}_s)^{-1}}{\sqrt{|\alpha '(\eta _p)|}}\sum _{i=1}^{m+n-1}d_i\overline{\xi }_i \left( \tfrac{\overline{\eta }_p-\overline{b}_1}{(\overline{\eta }_p-\overline{\xi }_i)(\overline{b}_1-\overline{\xi }_i)} +\tfrac{\overline{b}_1-\overline{\eta }_1}{(\overline{b}_1-\overline{\xi }_i)(\overline{\eta }_1-\overline{\xi }_i)}+\tfrac{\overline{\eta }_1-\overline{b}_s}{(\overline{\eta }_1-\overline{\xi }_i)(\overline{b}_s-\overline{\xi }_i)}\right) \\=&\tfrac{1}{\overline{\beta '(b_s)}} \tfrac{1}{\sqrt{|\alpha '(\eta _p)|}}\tfrac{1}{\overline{\eta }_p-\overline{b}_s} \left( \overline{\beta '(b_1)}\sqrt{|\alpha '(\eta _p)|}(\overline{\eta }_p-\overline{b}_1) r_{1,p}\right. \\ {}&+\left. \overline{\beta '(b_1)}\sqrt{|\alpha '(\eta _1)|}(\overline{b}_1-\overline{\eta }_1) r_{1,1}+\overline{\beta '(b_s)}\sqrt{|\alpha '(\eta _1)|}(\overline{\eta }_1-\overline{b}_s) r_{s,1} \right) \\=&\tfrac{\sqrt{|\alpha '(\eta _1)|}}{\sqrt{|\alpha '(\eta _p)|}}\tfrac{\overline{\eta }_1-\overline{b}_s}{\overline{\eta }_p-\overline{b}_s}r_{s,1}+\tfrac{\sqrt{|\alpha '(\eta _1)|}}{\sqrt{|\alpha '(\eta _p)|}} \tfrac{\overline{\beta '(b_1)}}{\overline{\beta '(b_s)}} \tfrac{\overline{b}_1-\overline{\eta }_1}{\overline{\eta }_p-\overline{b}_s}r_{1,1}+\tfrac{\overline{\beta '(b_1)}}{\overline{\beta '(b_s)}}\tfrac{\overline{\eta }_p-\overline{b}_1}{\overline{\eta }_p-\overline{b}_s}r_{1,p}. \end{aligned}$$

The remainder of the proof is analogous to that of Theorem 2.1 (use the dimension argument).\(\square \)

To describe the operators from \({\mathscr {T}}(\alpha ,\beta )\) in terms of their matrix representations with respect to \({\mathcal {V}}_m^{\alpha }\) and \(\widetilde{{\mathcal {K}}}_n^{\beta }=\{{\widetilde{k}}_{b_1}^{\beta },\ldots , {\widetilde{k}}_{b_n}^{\beta } \}\) we again use the fact that \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if \(B=C_{\beta }AC_{\alpha }\in {\mathscr {T}}(\alpha ,\beta )\).

Here, the matrix representation \({\widetilde{M}}_A=(t_{s,p})\) of a linear transformation \(A:K_{\alpha }\rightarrow K_{\beta }\) with respect to the bases \({\mathcal {V}}_m^{\alpha }\) and \(\widetilde{{\mathcal {K}}}_n^{\beta }\) is given by

$$\begin{aligned} t_{s,p}= \tfrac{1}{{\beta '(b_s)}}\langle Av_{\eta _p}^{\alpha },{k}_{b_s}^{\beta } \rangle . \end{aligned}$$

Since

$$\begin{aligned} C_{\alpha }v_{\eta _p}^{\alpha }=\overline{\eta }_p\alpha (\eta _p)v_{\eta _p}^{\alpha }=\overline{\eta }_p\alpha _{\lambda _1}v_{\eta _p}^{\alpha }, \end{aligned}$$

we get that

$$\begin{aligned} {\begin{matrix} t_{s,p}&{}= \tfrac{1}{{\beta '(b_s)}}\langle Av_{\eta _p}^{\alpha },{k}_{b_s}^{\beta } \rangle =\tfrac{1}{{\beta '(b_s)}}\langle C_{\beta }{k}_{b_s}^{\beta },C_{\beta } Av_{\eta _p}^{\alpha } \rangle \\ &{}=\tfrac{1}{{\beta '(b_s)}}\overline{\langle BC_{\alpha } v_{\eta _p}^{\alpha },{\widetilde{k}}_{b_s}^{\beta } \rangle } =\eta _p\overline{\alpha }_{\lambda _1}\overline{\left( \tfrac{1}{\overline{\beta '(b_s)}}\langle B v_{\eta _p}^{\alpha },{\widetilde{k}}_{b_s}^{\beta } \rangle \right) }=\eta _p\overline{\alpha }_{\lambda _1}{\overline{r}}_{s,p}, \end{matrix}} \end{aligned}$$
(3.4)

where \(M_B=(r_{s,p})\) is the matrix representation of B with respect to \({\mathcal {V}}_m^{\alpha }\) and \({{\mathcal {K}}}_n^{\beta }\).
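The conjugation identity \(C_{\alpha }v_{\eta _p}^{\alpha }=\overline{\eta }_p\alpha _{\lambda _1}v_{\eta _p}^{\alpha }\) used above can be checked pointwise on \(\mathbb {T}\), where \(C_{\alpha }f=\alpha \overline{z}\overline{f}\). A numerical sketch follows (an illustration, not part of the paper; the choices \(\alpha (z)=z^2\), \(\lambda _1=1\), \(\eta =1\) are arbitrary, and since \(v_{\eta }^{\alpha }\) is a positive multiple of \(k_{\eta }^{\alpha }\), it suffices to check the identity on the kernel itself):

```python
import numpy as np

# Pointwise check on |z| = 1 of C_alpha k_eta = conj(eta) alpha(eta) k_eta,
# where (C_alpha f)(z) = alpha(z) * conj(z) * conj(f(z)) on the circle.
alpha = lambda z: z**2
eta, lam = 1.0 + 0j, 1.0 + 0j                 # Clark point: alpha(eta) = lam

def k(z):                                     # the kernel k_eta^alpha
    return (1 - np.conj(alpha(eta)) * alpha(z)) / (1 - np.conj(eta) * z)

z = np.exp(1j * np.linspace(0.3, 6.0, 7))     # sample points on T, away from eta
lhs = alpha(z) * np.conj(z) * np.conj(k(z))   # (C_alpha k_eta)(z)
rhs = np.conj(eta) * lam * k(z)
err = np.max(np.abs(lhs - rhs))
```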

Theorem 3.2

Let \(\alpha \) be a finite Blaschke product with \(m>0\) (not necessarily distinct) zeros, let \({\mathcal {V}}_m^{\alpha }=\{v_{\eta _1}^{\alpha },\ldots ,v_{\eta _m}^{\alpha }\}\) be the Clark basis for \(K_{\alpha }\) corresponding to \(\lambda _1\in \mathbb {T}\) and assume that \(\beta \) is a finite Blaschke product with \(n>0\) distinct zeros \(b_1,\ldots ,b_n\). Moreover, let A be any linear transformation from \(K_{\alpha }\) into \(K_{\beta }\). If \({\widetilde{M}}_A=(t_{s,p})\) is the matrix representation of A with respect to the bases \({\mathcal {V}}_m^{\alpha }\) and \(\widetilde{{\mathcal {K}}}_n^{\beta }\), then \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if

$$\begin{aligned} t_{s,p}= & {} \frac{\sqrt{|\alpha '(\eta _1)|}}{\sqrt{|\alpha '(\eta _p)|}}\frac{1-\overline{\eta }_1{b_s}}{1-\overline{\eta }_p{b_s}}t_{s,1}+\frac{\sqrt{|\alpha '(\eta _1)|}}{\sqrt{|\alpha '(\eta _p)|}} \frac{{\beta '(b_1)}}{{\beta '(b_s)}} \frac{\overline{\eta }_1{b_1}-1}{1-\overline{\eta }_p{b_s}}t_{1,1}\nonumber \\&+\frac{{\beta '(b_1)}}{{\beta '(b_s)}}\frac{1-\overline{\eta }_p{b_1}}{1-\overline{\eta }_p{b_s}}t_{1,p} \end{aligned}$$
(3.5)

for all \( 1 \le p\le m\) and \(1\le s\le n\).

Proof

Let \({\widetilde{M}}_A=(t_{s,p})\) be the matrix representation of A with respect to \({{\mathcal {V}}}^{\alpha }_m\) and \(\widetilde{{\mathcal {K}}}^{\beta }_n\), and let \(M_B=(r_{s,p})\) be the matrix representation of \(B=C_{\beta }AC_{\alpha }\) with respect to \({{\mathcal {V}}}^{\alpha }_m\) and \({{\mathcal {K}}}^{\beta }_n\). As in the proof of Theorem 2.3, \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if \(B\in {\mathscr {T}}(\alpha ,\beta )\). By Theorem 3.1 the latter happens if and only if \(M_B\) satisfies (3.3). It is not difficult to verify using (3.4) that \(M_B\) satisfies (3.3) if and only if \({\widetilde{M}}_A\) satisfies (3.5).\(\square \)

Remark 3.3

  (a)


    Here, a matrix representing an asymmetric truncated Toeplitz operator is also determined by its entries along the first row and the first column. Again, the first row and column can be replaced by any other row and column.

  (b)

    To determine a symbol for A from \(M_A=(r_{s,p})\) note that by the proof of Theorem 3.1,

    $$\begin{aligned} A=\sum _{i=1}^{m+n-1}c_i( k_{\xi _i}^{\beta }\otimes k_{\xi _i}^{\alpha })=A_{\psi }^{\alpha ,\beta }\quad \text {with}\quad \psi =\sum _{i=1}^{m+n-1}c_i( k_{\xi _i}^{\beta }+\overline{k}_{\xi _i}^{\alpha }-1). \end{aligned}$$

    (see [16, Prop. 2.3(b)]). Now \(c_1,\ldots ,c_{m+n-1}\) are determined by

    $$\begin{aligned} \left\{ \begin{array}{cl} r_{1,p}=\tfrac{1}{\overline{\beta '(b_1)}} \tfrac{1}{\sqrt{|\alpha '(\eta _p)|}}\displaystyle {\sum _{i=1}^{m+n-1}c_i}\tfrac{1-\overline{\alpha }_{\lambda _1}\alpha (\xi _i)}{1-\overline{\eta }_p\xi _i}\,\tfrac{\overline{\beta (\xi _i)}}{\overline{\xi }_i-\overline{b}_1},&{}1\le p\le m,\\ r_{s,1}=\tfrac{1}{\overline{\beta '(b_s)}} \tfrac{1}{\sqrt{|\alpha '(\eta _1)|}}\displaystyle {\sum _{i=1}^{m+n-1}c_i}\tfrac{1-\overline{\alpha }_{\lambda _1}\alpha (\xi _i)}{1-\overline{\eta }_1\xi _i}\,\tfrac{\overline{\beta (\xi _i)}}{\overline{\xi }_i-\overline{b}_s},&{}1< s\le n.\end{array}\right. \end{aligned}$$

    To determine a symbol for A from \({\widetilde{M}}_A=(t_{s,p})\) use (3.4) and [14, Prop. 3.2(a)] as in Remark 2.4(b).
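As an illustration of the linear system in (b), the following numerical sketch (not from the paper; the concrete choices of \(\alpha \), \(\beta \), \(\lambda _1\) and \(\xi _i\) are arbitrary) builds the \((m+n-1)\times (m+n-1)\) coefficient matrix and recovers \(c_1,\ldots ,c_{m+n-1}\) from the first row and column entries:

```python
import numpy as np

# Sketch: recover c_i by solving the linear system of Remark 3.3(b).
# Illustrative data: alpha(z) = z^2 (Clark points eta = +1, -1 for lambda_1 = 1),
# beta a Blaschke product with distinct zeros 0 and 1/2.

def blaschke(zeros):
    def bl(z):
        val = 1.0 + 0j
        for a in zeros:
            val *= z if a == 0 else (abs(a) / a) * (a - z) / (1 - np.conj(a) * z)
        return val
    return bl

def deriv(f, z, h=1e-6):
    return (f(z + h) - f(z - h)) / (2 * h)    # adequate away from poles

alpha = lambda z: z**2
eta = np.array([1, -1], dtype=complex)
alpha_lam = alpha(eta[0])                     # the common value alpha(eta_p)
sq = np.sqrt([abs(deriv(alpha, e)) for e in eta])   # sqrt|alpha'(eta_p)|

b = np.array([0, 0.5], dtype=complex)
beta = blaschke(b)
dbeta = np.array([deriv(beta, z) for z in b])       # beta'(b_s)

m, n = len(eta), len(b)
xi = np.exp(1j * np.array([0.5, 1.3, 2.1]))   # m + n - 1 points of T \ {eta_p}

def coeff(s, p, i):
    # coefficient of c_i in the displayed formula for r_{s,p}
    return (1 / (np.conj(dbeta[s]) * sq[p])
            * (1 - np.conj(alpha_lam) * alpha(xi[i])) / (1 - np.conj(eta[p]) * xi[i])
            * np.conj(beta(xi[i])) / (np.conj(xi[i]) - np.conj(b[s])))

# equations: r_{1,p} for 1 <= p <= m, then r_{s,1} for 1 < s <= n (0-based below)
rows = [(0, p) for p in range(m)] + [(s, 0) for s in range(1, n)]
M = np.array([[coeff(s, p, i) for i in range(m + n - 1)] for (s, p) in rows])

rng = np.random.default_rng(1)
c_true = rng.standard_normal(m + n - 1) + 1j * rng.standard_normal(m + n - 1)
c = np.linalg.solve(M, M @ c_true)            # M @ c_true plays the known entries
```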

3.2 \(\widetilde{{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\), \({{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\)

We now assume that the zeros \(a_1,\ldots , a_m\) are distinct and we first consider matrix representations with respect to \(\widetilde{{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\).

Since the Clark basis \({\mathcal {V}}_n^{\beta }\) is orthogonal, the elements of the matrix representation \(M_A=(r_{s,p})\) of a linear transformation \(A:K_{\alpha }\rightarrow K_{\beta }\) with respect to \(\widetilde{{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\) are given by

$$\begin{aligned} r_{s,p}= \langle A{\widetilde{k}}_{a_p}^{\alpha },v_{\zeta _s}^{\beta }\rangle . \end{aligned}$$

Observe that here, for \(1\le p\le m\) and \(1\le s\le n\),

$$\begin{aligned} r_{s,p}= \langle {\widetilde{k}}_{a_p}^{\alpha },A^{*}v_{\zeta _s}^{\beta }\rangle =\alpha '(a_p)\overline{\left( \frac{1}{\overline{\alpha '(a_p)}} \langle A^{*}v_{\zeta _s}^{\beta },{\widetilde{k}}_{a_p}^{\alpha }\rangle \right) }=\alpha '(a_p)\overline{r_{p,s}^{*}}, \end{aligned}$$
(3.6)

where \(M_{A^{*}}=(r_{p,s}^{*})\) is the matrix representation of the adjoint \(A^{*}:K_{\beta }\rightarrow K_{\alpha }\) with respect to \({\mathcal {V}}_n^{\beta }\) and \({{\mathcal {K}}}_m^{\alpha }\) (compare with (3.2)). By Theorem 3.1 (with \(\alpha \) in place of \(\beta \) and \(\beta \) in place of \(\alpha \)), \(A^{*}\in {\mathscr {T}}(\beta ,\alpha )\) if and only if

$$\begin{aligned} r_{p,s}^{*}&=\frac{\sqrt{|\beta '(\zeta _1)|}}{\sqrt{|\beta '(\zeta _s)|}}\frac{\overline{\zeta }_1-{\overline{a}}_p}{\overline{\zeta }_s-{\overline{a}}_p}r_{p,1}^{*} +\frac{\sqrt{|\beta '(\zeta _1)|}}{\sqrt{|\beta '(\zeta _s)|}}\frac{\overline{\alpha '(a_1)}}{\overline{\alpha '(a_p)}}\frac{{\overline{a}}_1-\overline{\zeta }_1}{\overline{\zeta }_s-{\overline{a}}_p}r_{1,1}^{*}\nonumber \\&\quad +\frac{\overline{\alpha '(a_1)}}{\overline{\alpha '(a_p)}} \frac{\overline{\zeta }_s-{\overline{a}}_1}{\overline{\zeta }_s-{\overline{a}}_p}r_{1,s}^{*} \end{aligned}$$
(3.7)

for all \( 1 \le p\le m\) and \(1\le s\le n\). Since \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if \(A^{*}\in {\mathscr {T}}(\beta ,\alpha )\) [14], the next theorem follows easily from (3.6) and (3.7): it suffices to substitute \(r_{p,s}^{*}=\tfrac{1}{\overline{\alpha '(a_p)}}{\overline{r}}_{s,p}\) into (3.7) and take complex conjugates.

Theorem 3.4

Let \(\alpha \) be a finite Blaschke product with \(m>0\) distinct zeros \(a_1,\ldots , a_m\), let \(\beta \) be a finite Blaschke product with \(n>0\) (not necessarily distinct) zeros and let \({\mathcal {V}}_n^{\beta }=\{v_{\zeta _1}^{\beta },\ldots ,v_{\zeta _n}^{\beta }\}\) be the Clark basis for \(K_{\beta }\) corresponding to \(\lambda _2\in \mathbb {T}\). Moreover, let A be any linear transformation from \(K_{\alpha }\) into \(K_{\beta }\). If \(M_A=(r_{s,p})\) is the matrix representation of A with respect to the bases \(\widetilde{{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\), then \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if

$$\begin{aligned} r_{s,p}=\frac{{a_1}-{\zeta _s}}{{a_p}-{\zeta _s}}r_{s,1}+\frac{\sqrt{|\beta '(\zeta _1)|}}{\sqrt{|\beta '(\zeta _s)|}}\frac{{\zeta _1}-{a_1}}{{a_p}-{\zeta _s}}r_{1,1}+\frac{\sqrt{|\beta '(\zeta _1)|}}{\sqrt{|\beta '(\zeta _s)|}}\frac{{a_p}-{\zeta _1}}{{a_p}-{\zeta _s}}r_{1,p} \end{aligned}$$
(3.8)

for all \( 1 \le p\le m\) and \(1\le s\le n\).
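As an aside (not part of the original text), the passage from (3.6) and (3.7) to (3.8) can be checked symbolically: the sketch below encodes (3.7) for \(r_{p,s}^{*}\), applies (3.6) to the four entries involved, and compares with the right-hand side of (3.8).

```python
import sympy as sp

# Verify (3.8): substitute r*_{p,s} from (3.7) into r_{s,p} = alpha'(a_p) conj(r*_{p,s}).
a1, ap, z1, zs = sp.symbols('a1 ap zeta1 zetas')   # a_1, a_p, zeta_1, zeta_s
A1, Ap = sp.symbols('A1 Ap')                       # alpha'(a_1), alpha'(a_p)
q = sp.symbols('q', positive=True)                 # sqrt|beta'(zeta_1)| / sqrt|beta'(zeta_s)|
rp1, r11, r1s = sp.symbols('rp1 r11 r1s')          # r*_{p,1}, r*_{1,1}, r*_{1,s}
cj = sp.conjugate

# formula (3.7) for r*_{p,s}:
rps = (q * (cj(z1) - cj(ap)) / (cj(zs) - cj(ap)) * rp1
       + q * cj(A1) / cj(Ap) * (cj(a1) - cj(z1)) / (cj(zs) - cj(ap)) * r11
       + cj(A1) / cj(Ap) * (cj(zs) - cj(a1)) / (cj(zs) - cj(ap)) * r1s)

# formula (3.6), applied to all four entries involved:
r_sp, r_s1, r_11, r_1p = Ap * cj(rps), A1 * cj(r1s), A1 * cj(r11), Ap * cj(rp1)

# right-hand side of (3.8):
rhs = ((a1 - zs) / (ap - zs) * r_s1
       + q * (z1 - a1) / (ap - zs) * r_11
       + q * (ap - z1) / (ap - zs) * r_1p)

diff = sp.simplify(sp.expand(r_sp - rhs))
```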

Similarly, the matrix representation \({\widetilde{M}}_A=(t_{s,p})\) of a linear transformation \(A:K_{\alpha }\rightarrow K_{\beta }\) with respect to \({{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\) is given by

$$\begin{aligned} t_{s,p}= \langle A {k}_{a_p}^{\alpha },v_{\zeta _s}^{\beta }\rangle \end{aligned}$$

and

$$\begin{aligned} t_{s,p}=\overline{\alpha '(a_p) t_{p,s}^{*}}, \end{aligned}$$

where \({\widetilde{M}}_{A^{*}}=(t_{p,s}^{*})\) is the matrix representation of \(A^{*}:K_{\beta }\rightarrow K_{\alpha }\) with respect to \({\mathcal {V}}_n^{\beta }\) and \(\widetilde{{\mathcal {K}}}_m^{\alpha }\). By Theorem 3.2, \(A^{*}\in {\mathscr {T}}(\beta ,\alpha )\) if and only if

$$\begin{aligned} t_{p,s}^{*}=\frac{\sqrt{|\beta '(\zeta _1)|}}{\sqrt{|\beta '(\zeta _s)|}}\frac{1-\overline{\zeta }_1{a}_p}{1-\overline{\zeta }_s{a}_p}t_{p,1}^{*} +\frac{\sqrt{|\beta '(\zeta _1)|}}{\sqrt{|\beta '(\zeta _s)|}}\frac{{\alpha '(a_1)}}{{\alpha '(a_p)}}\frac{\overline{\zeta }_1{a}_1-1}{1-\overline{\zeta }_s{a}_p}t_{1,1}^{*}+\frac{{\alpha '(a_1)}}{{\alpha '(a_p)}} \frac{1-\overline{\zeta }_s{a}_1}{1-\overline{\zeta }_s{a}_p}t_{1,s}^{*} \end{aligned}$$

for all \(1\le p\le m\), \(1\le s\le n\), and we get the following.

Theorem 3.5

Let \(\alpha \) be a finite Blaschke product with \(m>0\) distinct zeros \(a_1,\ldots , a_m\), let \(\beta \) be a finite Blaschke product with \(n>0\) (not necessarily distinct) zeros and let \({\mathcal {V}}_n^{\beta }=\{v_{\zeta _1}^{\beta },\ldots ,v_{\zeta _n}^{\beta }\}\) be the Clark basis for \(K_{\beta }\) corresponding to \(\lambda _2\in \mathbb {T}\). Moreover, let A be any linear transformation from \(K_{\alpha }\) into \(K_{\beta }\). If \({\widetilde{M}}_A=(t_{s,p})\) is the matrix representation of A with respect to the bases \({{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\), then \(A\in {\mathscr {T}}(\alpha ,\beta )\) if and only if

$$\begin{aligned} t_{s,p}=\frac{1-{\overline{a}}_1\zeta _s}{1-{\overline{a}}_p\zeta _s} t_{s,1}+\frac{\sqrt{|\beta '(\zeta _1)|}}{\sqrt{|\beta '(\zeta _s)|}}\frac{{\overline{a}}_1\zeta _1-1}{1-{\overline{a}}_p\zeta _s}t_{1,1}+ \frac{\sqrt{|\beta '(\zeta _1)|}}{\sqrt{|\beta '(\zeta _s)|}}\frac{1-{\overline{a}}_p\zeta _1}{1-{\overline{a}}_p\zeta _s}t_{1,p} \end{aligned}$$
(3.9)

for all \( 1 \le p\le m\) and \(1\le s\le n\).

Remark 3.6

Recall that \((A^{\alpha ,\beta }_{\varphi })^{*}=A^{\beta ,\alpha }_{\overline{\varphi }}\) [14]. To find a symbol for \(A\in {\mathscr {T}}(\alpha ,\beta )\) it is therefore enough to find a symbol for \(A^{*}\in {\mathscr {T}}(\beta ,\alpha )\). Now if the matrix representation \({M}_A=(r_{s,p})\) with respect to \(\widetilde{{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\) satisfies (3.8), then a symbol for \(A^{*}\) can be obtained from \({M}_{A^{*}}=(r_{p,s}^{*})=\left( \frac{1}{\overline{\alpha '(a_p)}}{\overline{r}}_{s,p}\right) \) as in Remark 3.3(b). If the matrix representation \({\widetilde{M}}_A=(t_{s,p})\) with respect to \({{\mathcal {K}}}_m^{\alpha }\) and \({\mathcal {V}}_n^{\beta }\) satisfies (3.9), then a symbol for \(A^{*}\) can be obtained from \({\widetilde{M}}_{A^{*}}=(t_{p,s}^{*})=\left( \frac{1}{\alpha '(a_p)}\overline{t}_{s,p}\right) \).
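In matrix terms, the conversion \({\widetilde{M}}_{A^{*}}=\left( \frac{1}{\alpha '(a_p)}\overline{t}_{s,p}\right) \) is a conjugate transpose with row \(p\) rescaled by \(1/\alpha '(a_p)\); a minimal sketch (the matrix and derivative values below are random placeholders, not from the paper):

```python
import numpy as np

# t*_{p,s} = conj(t_{s,p}) / alpha'(a_p): conjugate-transpose of the n x m
# matrix (t_{s,p}), with row p of the result scaled by 1 / alpha'(a_p).
rng = np.random.default_rng(2)
m, n = 3, 4                                   # placeholder dimensions
T = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
da = rng.standard_normal(m) + 1j * rng.standard_normal(m)   # stands for alpha'(a_p)

T_adj = np.conj(T).T / da[:, None]            # the m x n matrix (t*_{p,s})
```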