1 Introduction

The Jacobi matrix is a tridiagonal matrix defined as

$$\begin{aligned} \begin{pmatrix} b_1 & a_1 & 0 & \dots & 0 \\ a_1 & b_2 & a_2 & \ddots & \vdots \\ 0 & a_2 & b_3 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & a_{n-1} \\ 0 & \dots & 0 & a_{n-1} & b_n \end{pmatrix} \end{aligned}$$
(1.1)

where \(n \in {\mathbb {N}}\), \(a_k > 0\) for any \(k \in \{1,2,\dots ,n-1\}\) and \(b_k \in {\mathbb {R}}\) for any \(k \in \{1,2,\dots ,n\}\). When \(a_k=1\) for each \(k \in \{1,2,\dots ,n-1\}\), the Jacobi matrix (1.1) defines the finite discrete Schrödinger operator.
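For readers who wish to experiment numerically, the following short Python sketch (an illustration only; it assumes NumPy is available, and the sample values of n and \(b_k\) are arbitrary) assembles the Jacobi matrix (1.1) from the sequences \(\{a_k\}_{k=1}^{n-1}\) and \(\{b_k\}_{k=1}^{n}\), and specializes it to the discrete Schrödinger case \(a_k = 1\).

\begin{verbatim}
import numpy as np

def jacobi_matrix(a, b):
    # Jacobi matrix (1.1): off-diagonal a_1,...,a_{n-1} (a_k > 0) and
    # diagonal b_1,...,b_n (b_k real).
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

# Discrete Schroedinger case: a_k = 1 for every k; the diagonal is arbitrary.
n = 5
b = [0.3, -1.0, 0.0, 2.5, 0.7]
J = jacobi_matrix(np.ones(n - 1), b)
print(np.linalg.eigvalsh(J))  # real eigenvalues, since J is symmetric
\end{verbatim}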

Inverse spectral problems aim to recover \(\{a_k\}_{k=1}^{n-1}\) and \(\{b_k\}_{k=1}^{n}\) from spectral information. Ambarzumian-type problems focus on inverse spectral problems for free discrete Schrödinger operators, i.e. \(a_k = 1\) and \(b_k = 0\) for every k, or similar cases when \(b_k = 0\) for some k.

The study of inverse spectral problems of Schrödinger (Sturm-Liouville) equations goes back to Ambarzumian’s work on a finite interval [1], and there is a vast and still expanding literature on both continuous (see e.g. [8, 12, 13, 15, 20] and references therein) and discrete (see e.g. [2,3,4,5,6,7,8,9,10,11, 16, 18] and references therein) settings.

In this paper, we first revisit the classical Ambarzumian problem for the finite discrete Schrödinger operator in Theorem 3.3, which says that the spectrum of the free operator uniquely determines the operator. Then we provide a counterexample, Example 3.4, which shows that knowledge of the spectrum of the free operator with a non-zero boundary condition is not sufficient for unique recovery. In Theorem 3.5, we observe that the non-zero boundary condition, along with the corresponding spectrum of the free operator, is needed for the uniqueness result. However, in Theorem 3.6, we prove that for the free operator with Floquet boundary conditions, the set of eigenvalues including multiplicities is sufficient to obtain uniqueness up to transpose.

We also answer the following mixed Ambarzumian-type inverse problem positively in Theorem 4.3.

Inverse Spectral Problem Let us consider the discrete Schrödinger matrix S\(_{n,2}\), defined by \(a_k = 1\) for \(k \in \{1,\dots ,n-1\}\), \(b_1,b_2 \in {\mathbb {R}}\), and \(b_k = 0\) for \(k \in \{3,\dots ,n\}\). Let us also denote by F\(_n\) the free discrete Schrödinger operator, defined by \(a_k = 1\) for \(k \in \{1,\dots ,n-1\}\) and \(b_k = 0\) for \(k \in \{1,\dots ,n\}\). If S\(_{n,2}\) and F\(_n\) share two consecutive eigenvalues, do we get \(b_1 = b_2 = 0\), i.e. S\(_{n,2} = \mathbf{F }_n\)?

The paper is organized as follows. In Sect. 2 we fix our notations. In Sect. 3 we consider the problem of unique determination of the finite free discrete Schrödinger operator from its spectrum, with various boundary conditions, namely any real constant boundary condition at zero, and Floquet boundary conditions of any angle. In Sect. 4 we prove the above mentioned Ambarzumian-type mixed inverse spectral problem.

2 Notations

Let us start by fixing our notation. Let S\(_n\) represent the discrete Schrödinger matrix of size \(n \times n\)

$$\begin{aligned} \mathbf{S} _n := \begin{pmatrix} b_{1} & 1 & 0 & \cdots & 0 \\ 1 & b_{2} & 1 & \ddots & \vdots \\ 0 & 1 & b_{3} & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & 1 \\ 0 & \cdots & 0 & 1 & b_{n} \end{pmatrix}, \end{aligned}$$

where \(b_k\in {\mathbb {R}}\). Let S\(_n(b,B)\) denote the discrete Schrödinger matrix S\(_n\) satisfying \(b_1=b\) and \(b_n=B\). Let us also introduce the following matrices:

$$\begin{aligned} \mathbf{S }_{n}(\theta ) := \begin{pmatrix} b_{1} & 1 & 0 & \ldots & e^{2\pi i\theta }\\ 1 & b_{2} & 1 & \ddots & \vdots \\ 0 & 1 & b_{3} & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots & 1\\ e^{-2\pi i\theta } & \ldots & 0 & 1 & b_n \end{pmatrix} \quad \text {and} \quad \mathbf{S }_{n,m} := \begin{pmatrix} b_{1} & 1 & 0 & 0 & \ldots & 0 \\ 1 & \ddots & 1 & 0 & \ddots & 0 \\ 0 & 1 & b_{m} & 1 & \ddots & \vdots \\ 0 & 0 & 1 & 0 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & 1\\ 0 & \ldots & \ldots & 0 & 1 & 0 \end{pmatrix}. \end{aligned}$$

Let us denote the free discrete Schrödinger matrix of size \(n \times n\) by \(\mathbf{F} _n\):

$$\begin{aligned} \mathbf{F} _n := \begin{pmatrix} 0 & 1 & 0 & \cdots & 0\\ 1 & 0 & 1 & \ddots & \vdots \\ 0 & 1 & 0 & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots & 1\\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}, \end{aligned}$$

so F\(_n(b,B)\) and F\(_n(\theta )\) are defined accordingly.

In addition, let \(p_n(x)\) be the characteristic polynomial of \(\mathbf{F} _n\) with zeroes \(\lambda _1,\dots ,\lambda _n\) and let \(q_n(x)\) be the characteristic polynomial of \(\mathbf{S} _n\) with zeroes \(\mu _1,\dots ,\mu _n\). The discrete Schrödinger matrix S\(_n\) has n distinct simple eigenvalues, so we can order the eigenvalues as \(\lambda _1< \lambda _2< \cdots < \lambda _n\) and \(\mu _1< \mu _2< \cdots < \mu _n\). This property is valid for any Jacobi matrix and can be found in [19], which provides an extensive study of Jacobi operators.
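As a quick numerical illustration of this property (not a substitute for the argument in [19]), the following Python sketch, assuming NumPy, samples random diagonals and confirms that the resulting discrete Schrödinger matrices have n distinct eigenvalues.

\begin{verbatim}
import numpy as np

# Illustration only: discrete Schroedinger matrices with randomly chosen
# diagonals have n distinct, simple eigenvalues, so they can be ordered strictly.
rng = np.random.default_rng(0)
n = 6
for _ in range(100):
    b = rng.normal(size=n)
    S = np.diag(b) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    mu = np.linalg.eigvalsh(S)         # returned in ascending order
    assert np.all(np.diff(mu) > 1e-8)  # strict gaps between consecutive eigenvalues
\end{verbatim}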

Throughout the paper we will discuss our results via matrices using the notations introduced above, but let us note that the matrices S\(_n(b,B)\) and S\(_n(\theta )\) can be represented as difference equations.

Remark 2.1

Given S\(_n\), let us consider the discrete Schrödinger matrix where all \(b_k\)’s are the same as in S\(_n\) except \(b_1\) and \(b_n\) are replaced by \(b_1+b\) and \(b_n+B\) respectively for \(b,B \in {\mathbb {R}}\), i.e.

$$\begin{aligned} \mathbf{S} _n + b(\delta _1,\cdot )\delta _1+B(\delta _n,\cdot )\delta _n. \end{aligned}$$
(2.1)

The discrete Schrödinger matrix (2.1) is given by the difference expression

$$\begin{aligned} f_{k-1} + b_kf_k + f_{k+1}, \quad \quad k \in \{1,\cdots ,n\} \end{aligned}$$
(2.2)

with the boundary conditions

$$\begin{aligned} f_0 = bf_1 \qquad \text {and} \qquad f_{n+1} = Bf_n. \end{aligned}$$

In order to get a unique difference expression with these boundary conditions for a given discrete Schrödinger matrix, we can think of the first and the last diagonal entries of the matrix S\(_n\) as the boundary conditions at 0 and \(n+1\) respectively. Therefore S\(_n(b,B)\) denotes the discrete Schrödinger matrix with boundary conditions b at 0, B at \(n+1\).

If we consider the difference expression (2.2) with the Floquet boundary conditions

$$\begin{aligned} f_0 = f_ne^{2\pi i\theta } \qquad \text {and} \qquad f_{n+1}=f_1e^{-2\pi i\theta }, \qquad \theta \in [0,1) \end{aligned}$$

then we get the matrix representation S\(_n(\theta )\).
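The correspondence between the matrix S\(_n(\theta )\) and the difference expression (2.2) with Floquet boundary conditions can be checked numerically. The following Python sketch (an illustration only; it assumes NumPy, and the sample values of \(b_k\) and \(\theta \) are arbitrary) verifies it for one vector f.

\begin{verbatim}
import numpy as np

def schrodinger_floquet(b, theta):
    # S_n(theta): discrete Schroedinger matrix with Floquet boundary conditions.
    n = len(b)
    S = np.diag(np.asarray(b, dtype=complex)) \
        + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    S[0, n - 1] = np.exp(2j * np.pi * theta)
    S[n - 1, 0] = np.exp(-2j * np.pi * theta)
    return S

# Check, for one vector f, that S_n(theta) f realizes the difference expression
# f_{k-1} + b_k f_k + f_{k+1} with f_0 = e^{2 pi i theta} f_n and
# f_{n+1} = e^{-2 pi i theta} f_1, as in Remark 2.1.
b, theta = np.array([0.5, -1.0, 0.0, 2.0]), 0.3
n = len(b)
S = schrodinger_floquet(b, theta)
f = np.random.default_rng(1).normal(size=n) + 0j
f_ext = np.concatenate(([np.exp(2j * np.pi * theta) * f[-1]],
                        f,
                        [np.exp(-2j * np.pi * theta) * f[0]]))
assert np.allclose(S @ f, f_ext[:-2] + b * f + f_ext[2:])
\end{verbatim}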

3 Ambarzumian problem with various boundary conditions

Firstly let us obtain the first three leading coefficients of \(q_n(x)\). This is a well-known result, but we give a proof in order to make this section self-contained.

Lemma 3.1

The characteristic polynomial \(q_n(x)\) of the discrete Schrödinger matrix S\(_n\) has the form

$$\begin{aligned} q_n(x)=x^n-\left( \sum _{i=1}^n b_i\right) x^{n-1}+\left( \sum _{1\le i<j\le n}b_ib_j-(n-1)\right) x^{n-2} + Q_{n-2}(x), \end{aligned}$$

where \(Q_{n-2}(x)\) is a polynomial of degree at most \(n-2\).

Proof

The characteristic polynomial \(q_n(x)\) is given by \(\det (x\mathbf{I} _n - \mathbf{S} _n)\). Let us consider Leibniz's formula for determinants

$$\begin{aligned} \det (\mathbf{A} )=\sum _{\sigma \in {\mathbb {S}}_n}\text {sgn}(\sigma )\prod _{i=1}^n\alpha _{i,\sigma (i)}, \end{aligned}$$
(3.1)

where \(\mathbf{A} = [\alpha _{i,j}]\) is an \(n \times n\) matrix and sgn is the sign function of permutations in the permutation group \({\mathbb {S}}_n\). In (3.1) the identity permutation contributes the term \(\prod _{i=1}^n (x-b_i)\), while any other permutation moves at least two indices off the diagonal and therefore contributes a polynomial of degree at most \(n-2\). Hence \(q_n(x)\) is a monic polynomial and the coefficient of \(x^{n-1}\) is \(-{\text {tr}}(\mathbf{S} _n)\).

The contribution of the identity permutation to the coefficient of the \(x^{n-2}\) term of \(q_n(x)\) is the sum over all pairs of diagonal entries, namely \(\sum _{1 \le i < j\le n}b_ib_j\).

The only other permutations that can yield an \(x^{n-2}\) term are transpositions \((i,j)\). However, if \(|i-j|\ge 2\), then the corresponding product vanishes, since it contains an off-diagonal entry equal to zero. Thus we are looking for transpositions with \(|i-j|=1\). There are \(n-1\) of these, namely \((1,2),(2,3),\dots ,(n-2,n-1),(n-1,n)\). The product is of the form

$$\begin{aligned} \prod _{i=1}^n \alpha _{i, \sigma (i)} = \frac{(-1)(-1)(x-b_1)(x-b_2)\cdots (x-b_n)}{(x-b_i)(x-b_j)} = x^{n-2} + r_{n-3}(x), \end{aligned}$$

where \(r_{n-3}(x)\) is a polynomial of degree at most \(n-3\). Since the signature of a transposition is negative, each such product contributes \(-x^{n-2}\). Summing over all \(n-1\) transpositions and adding \(\sum _{1\le i < j\le n}b_ib_j\) yields our desired result. \(\square \)
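Lemma 3.1 can also be checked numerically. The following Python sketch (an illustration only; it assumes NumPy, and the sample diagonal is arbitrary) compares the three leading coefficients of \(q_n(x)\) with the formula above.

\begin{verbatim}
import numpy as np
from itertools import combinations

# Illustration of Lemma 3.1: compare the three leading coefficients of
# q_n(x) = det(xI_n - S_n) with the formula in the lemma.
rng = np.random.default_rng(2)
n = 7
b = rng.normal(size=n)
S = np.diag(b) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
coeffs = np.poly(S)   # coefficients of det(xI_n - S), leading coefficient first
sum_b = b.sum()
sum_bb = sum(b[i] * b[j] for i, j in combinations(range(n), 2))
assert np.isclose(coeffs[0], 1.0)               # q_n is monic
assert np.isclose(coeffs[1], -sum_b)            # coefficient of x^{n-1}
assert np.isclose(coeffs[2], sum_bb - (n - 1))  # coefficient of x^{n-2}
\end{verbatim}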

Corollary 3.2

The characteristic polynomial \(p_n(x)\) of the free discrete Schrödinger matrix F\(_n\) has the form

$$\begin{aligned} p_n(x)=x^n-(n-1)x^{n-2} + P_{n-2}(x), \end{aligned}$$

where \(P_{n-2}(x)\) is a polynomial of degree at most \(n-2\).

Proof

Simply set \(b_i = 0\) for each \(i \in \{1,\cdots ,n\}\) and apply Lemma 3.1. \(\square \)

Now let us give a proof of the Ambarzumian problem with Dirichlet-Dirichlet boundary conditions, i.e. for the matrix F\(_n(0,0)\) = F\(_n\) in our notation.

Theorem 3.3

Suppose S\(_n\) shares all of its eigenvalues with F\(_n\). Then S\(_n\) = F\(_n\).

Proof

In order for the two matrices to have all the same eigenvalues, they must have equal characteristic polynomials. Comparing the results from Lemma 3.1 to Corollary 3.2, we must have

$$\begin{aligned} \sum _{i=1}^{n}b_i=0\qquad \text {and} \qquad \sum _{1\le i < j\le n}b_ib_j=0 \end{aligned}$$
(3.2)

This leads us to conclude that

$$\begin{aligned} \sum _{i=1}^{n} b_i^2 = \left( \sum _{i=1}^{n} b_i\right) ^2-2\left( \sum _{1\le i < j\le n}b_ib_j\right) =0 \end{aligned}$$

which only occurs when all the \(b_i\) are zero, i.e. \(\mathbf{S} _n=\mathbf{F} _n\), since \(b_i \in {\mathbb {R}}\) for each \(i \in \{1,\cdots ,n\}\). \(\square \)

A natural question to ask is whether we still get uniqueness of the free operator in the presence of non-zero boundary conditions. After Theorem 3.3, one may expect the free discrete Schrödinger operator with a non-zero boundary condition at 0 to be uniquely determined by its spectrum. However, this is not the case, as the following counterexample shows:

Example 3.4

Let us define the discrete Schrödinger matrices

$$\begin{aligned} A {:=} \begin{pmatrix} 2 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix} \text { and } B {:=} \begin{pmatrix} -2/(1+\sqrt{5}) & 1 & 0\\ 1 & 1 & 1\\ 0 & 1 & (1+\sqrt{5})/2 \end{pmatrix}. \end{aligned}$$

The matrices A and B have the same characteristic polynomial \(x^3-2x^2-2x+2\), so they share the same spectrum.
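The computation behind Example 3.4 is easy to reproduce numerically. The following Python sketch (an illustration only, assuming NumPy) prints the characteristic polynomials and the eigenvalue differences of A and B.

\begin{verbatim}
import numpy as np

# Verification of Example 3.4: A and B share the characteristic polynomial
# x^3 - 2x^2 - 2x + 2, hence the same spectrum, although their entries differ.
s = np.sqrt(5.0)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
B = np.array([[-2.0 / (1.0 + s), 1.0, 0.0],
              [1.0,              1.0, 1.0],
              [0.0,              1.0, (1.0 + s) / 2.0]])
print(np.poly(A))                                      # approx [ 1. -2. -2.  2.]
print(np.poly(B))                                      # approx [ 1. -2. -2.  2.]
print(np.linalg.eigvalsh(A) - np.linalg.eigvalsh(B))   # approx [0. 0. 0.]
\end{verbatim}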

This example shows that Theorem 3.3 describes a rather special situation: in order to get uniqueness of a rank-one perturbation of the free operator, we also need to know the non-zero boundary condition along with the spectrum.

Theorem 3.5

Suppose S\(_n(b,b_n)\) shares all of its eigenvalues with F\(_n(b,0)\). Then S\(_n(b,b_n)\) = F\(_n(b,0)\).

Proof

Comparing coefficients of characteristic polynomials of S\(_n(b,b_n)\) and F\(_n(b,0)\) like we did in the proof of Theorem 3.3, we get

$$\begin{aligned} b+\sum _{i=2}^{n}b_i=b\qquad \text {and} \qquad b\sum _{j=2}^n b_j + \sum _{2\le i < j\le n}b_ib_j=0 \end{aligned}$$
(3.3)

The first equation of (3.3) gives \(\sum _{i=2}^{n}b_i=0\) and using this in the second equation of (3.3) we get \(\sum _{2\le i < j\le n}b_ib_j = 0\). This leads us to conclude that

$$\begin{aligned} \sum _{i=2}^{n} b_i^2 = \left( \sum _{i=2}^{n} b_i\right) ^2-2\left( \sum _{2\le i < j\le n}b_ib_j\right) =0 \end{aligned}$$

which only occurs when \(b_i = 0\) for each \(i \in \{2,\cdots ,n\}\), i.e. S\(_n(b,b_n)\) = F\(_n(b,0)\). \(\square \)

Now, we approach the Ambarzumian problem with Floquet boundary conditions. Let us recall that S\(_n(\theta )\) and F\(_n(\phi )\) denote a discrete Schrödinger operator and the free discrete Schrödinger operator with Floquet boundary conditions for the angles \(0 \le \theta < 1\) and \(0 \le \phi < 1\), respectively.

The following theorem shows that with Floquet boundary conditions, the knowledge of the spectrum of the free operator is sufficient for the uniqueness up to transpose.

Theorem 3.6

Suppose that S\(_n(\theta )\) shares all of its eigenvalues with F\(_n(\phi )\), including multiplicity, for \(0 \le \theta ,\phi < 1\). Then \(b_1=\dots =b_n=0\) and \(\theta \in \{\phi , 1-\phi \}\), i.e. S\(_n(\theta )\) = F\(_n(\phi )\) or F\(_n^{ {\mathsf {T}}}(\phi )\).

Proof

Let us define D[k,l] as the following determinant of an \((l-k+1) \times (l-k+1)\) matrix for \(1\le k < l \le n\):

$$\begin{aligned} D[k,l]{:=}\begin{vmatrix} x-b_k&-1&0&\cdots&0\\ -1&x-b_{k+1}&\ddots&\ddots&\vdots \\ 0&\ddots&\ddots&\ddots&0\\ \vdots&\ddots&\ddots&x-b_{l-1}&-1\\ 0&\cdots&0&-1&x-b_l\\ \end{vmatrix}_{(l-k+1)}\\ \end{aligned}$$

Let us consider the characteristic polynomial of S\(_n(\theta )\) by using cofactor expansion on the first row:

$$\begin{aligned} \begin{aligned} |x\mathbf{I} _n-\mathbf{S} _n(\theta )|&=(x-b_1)D[2,n]+\begin{vmatrix} -1&-1&0&\cdots&0\\ -1&x-b_{3}&-1&\ddots&\vdots \\ 0&-1&\ddots&\ddots&0\\ \vdots&\ddots&\ddots&x-b_{n-1}&-1\\ -e^{-2\pi i \theta }&0&\cdots&-1&x-b_n\\ \end{vmatrix}_{(n-1)}\\&\quad +(-1)^{n+1}\left( -e^{2\pi i \theta }\right) \begin{vmatrix} -1&x-b_{2}&-1&0&\cdots \\ 0&-1&x-b_{3}&\ddots&\ddots \\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\cdots&0&-1&x-b_{n-1}\\ -e^{-2\pi i \theta }&0&\cdots&0&-1\\ \end{vmatrix}_{(n-1)} \end{aligned}\\ \end{aligned}$$

Then by using cofactor expansions on the first row of the determinant in the second term and on the first column of the determinant in the third term we get

$$\begin{aligned} \begin{aligned} |x\mathbf{I} _n-\mathbf{S} _n(\theta )|&=(x-b_1)D[2,n]+(-1)D[3,n]+\begin{vmatrix} 0&-1&0&\cdots&0\\ 0&x-b_{4}&-1&\ddots&\vdots \\ 0&-1&\ddots&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&-1\\ -e^{-2\pi i \theta }&0&\cdots&-1&x-b_{n}\\ \end{vmatrix}_{(n-2)}\\&\quad +(-1)^{n+1}\left( -e^{2\pi i \theta }\right) (-1) \begin{vmatrix} -1&x-b_{3}&-1&0&\cdots \\ 0&-1&x-b_{4}&\ddots&\ddots \\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\cdots&\ddots&-1&x-b_{n-1}\\ 0&\cdots&\cdots&0&-1\\ \end{vmatrix}_{(n-2)}\\&\quad +(-1)^{n+1}\left( -e^{2\pi i \theta }\right) (-1)^n\left( -e^{-2\pi i \theta }\right) D[2,n-1] \end{aligned}\\ \end{aligned}$$

Now let’s use cofactor expansion on the first column of the determinant in the third term. We see that the determinant in the fourth term is the determinant of an upper triangular matrix. Therefore,

$$\begin{aligned} \begin{aligned} |x\mathbf{I} _n-\mathbf{S} _n(\theta )|&=(x-b_1)D[2,n]-D[3,n]\\&\quad +(-1)^{n-1}\left( -e^{2\pi i \theta }\right) \begin{vmatrix} -1&0&0&\cdots&0 \\ x-b_4&-1&\ddots&\ddots&\vdots \\ -1&x-b_5&\ddots&\ddots&0\\ 0&\ddots&\ddots&-1&0\\ -e^{-2\pi i \theta }&0&-1&x-b_{n-1}&-1\\ \end{vmatrix}_{(n-3)}\\&\quad +(-1)^{n+1}\left( -e^{2\pi i \theta }\right) \left[ (-1)(-1)^{n-2}+(-1)^n\left( -e^{-2\pi i \theta }\right) D[2,n-1]\right] \end{aligned}\\ \end{aligned}$$

Finally, observing that the determinant in the third term is that of a lower triangular matrix, we get

$$\begin{aligned} \begin{aligned} |x\mathbf{I} _n-\mathbf{S} _n(\theta )|&=(x-b_1)D[2,n]-D[3,n] +(-1)^{n-1}\left( -e^{2\pi i \theta }\right) (-1)^{n-3}\\&\quad +(-1)^{2n}\left( -e^{-2\pi i \theta }\right) +(-1)^{2n+1}D[2,n-1]\\&\quad =(x-b_1)D[2,n]-D[3,n]-D[2,n-1] -e^{2\pi i \theta }-e^{-2\pi i \theta } \end{aligned} \end{aligned}$$
(3.4)

At this point note that D[k,l] is the characteristic polynomial of the following discrete Schrödinger matrix

$$\begin{aligned} \begin{pmatrix} b_{k} & 1 & 0 & \cdots & 0 \\ 1 & b_{k+1} & 1 & \ddots & \vdots \\ 0 & 1 & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & b_{l-1} & 1 \\ 0 & \cdots & 0 & 1 & b_{l} \end{pmatrix}. \end{aligned}$$

Therefore using Lemma 3.1 and Eq. (3.4), we obtain

$$\begin{aligned} \begin{aligned} |x\mathbf{I} _n-\mathbf{S} _n(\theta )|&=x^n-\left( \sum _{i=1}^n b_i\right) x^{n-1}+ \left( \sum _{1\le i<j\le n}b_ib_j-(n-1)\right) x^{n-2}\\&\quad +f_{n-3}(x)-e^{2\pi i \theta }-e^{-2\pi i \theta } \end{aligned} \end{aligned}$$
(3.5)

where \(f_{n-3}\) is a polynomial of degree at most \(n-3\), independent of \(\theta \). Using the same steps for \(\mathbf{F} _n(\phi )\), we obtain

$$\begin{aligned} \begin{aligned} |x\mathbf{I} _n-\mathbf{F} _n(\phi )|&=x^n-(n-1)x^{n-2}+g_{n-3}(x)-e^{2\pi i \phi }-e^{-2\pi i \phi } \end{aligned} \end{aligned}$$
(3.6)

where \(g_{n-3}\) is a polynomial of degree at most \(n-3\), which is independent of \(\phi \).

Comparing Eqs. (3.5) and (3.6), as we did in the proof of Theorem 3.3, we can conclude that the diagonal entries \(\{b_i\}_{i=1}^n\) of S\(_n(\theta )\) must be zero.

Note that the expression consisting of the first three terms on the right-hand side of (3.4), \((x-b_1)D[2,n]-D[3,n]-D[2,n-1]\), is independent of \(\theta \). In addition, we observed that \(b_1=\dots =b_n=0\). Therefore using the equality of the characteristic polynomials of S\(_n(\theta )\) and F\(_n(\phi )\), we obtain

$$\begin{aligned} e^{2\pi i \theta }+e^{-2\pi i \theta } = e^{2\pi i \phi }+e^{-2\pi i \phi }, \end{aligned}$$

which can be written using Euler's formula as

$$\begin{aligned} 2\cos (2\pi \theta ) = 2\cos (2\pi \phi ). \end{aligned}$$
(3.7)

Equation (3.7) is valid if and only if \(\theta \) differs from \(\phi \) or \(-\phi \) by an integer. Since \(0 \le \theta ,\phi < 1\), the only possible values for \(\theta \) are \(\phi \) and \(1-\phi \). This completes the proof. \(\square \)
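The conclusion of Theorem 3.6 can be observed numerically. The following Python sketch (an illustration only; it assumes NumPy, and the values of n and \(\phi \) are arbitrary) confirms that F\(_n(1-\phi )\) is the transpose of F\(_n(\phi )\) and that the two matrices are isospectral, while a different angle generically moves the spectrum.

\begin{verbatim}
import numpy as np

def floquet(b, ang):
    # Discrete Schroedinger matrix with Floquet boundary conditions (angle ang).
    n = len(b)
    M = np.diag(np.asarray(b, dtype=complex)) \
        + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    M[0, n - 1] = np.exp(2j * np.pi * ang)
    M[n - 1, 0] = np.exp(-2j * np.pi * ang)
    return M

n, phi = 6, 0.17
F_phi = floquet([0.0] * n, phi)
F_mirror = floquet([0.0] * n, 1.0 - phi)
ev = lambda M: np.linalg.eigvalsh(M)              # eigenvalues in ascending order
assert np.allclose(F_mirror, F_phi.T)             # F_n(1 - phi) = F_n(phi)^T
assert np.allclose(ev(F_phi), ev(F_mirror))       # hence the same spectrum
print(ev(F_phi) - ev(floquet([0.0] * n, 0.4)))    # different angle: generically nonzero
\end{verbatim}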

4 An Ambarzumian-type mixed inverse spectral problem

Let us recall that \(\mathbf{S }_{n,m}\) denotes the following \(n \times n\) discrete Schrödinger matrix for \(1 \le m \le n\):

$$\begin{aligned} \mathbf{S }_{n,m} {:=} \begin{pmatrix} b_{1} & 1 & 0 & 0 & \ldots & 0 \\ 1 & \ddots & 1 & 0 & \ddots & 0 \\ 0 & 1 & b_{m} & 1 & \ddots & \vdots \\ 0 & 0 & 1 & 0 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & 1\\ 0 & \ldots & \ldots & 0 & 1 & 0 \end{pmatrix} \end{aligned}$$

and F\(_{n}\) denotes the free discrete Schrödinger matrix of size \(n \times n\). In this section our goal is to answer the following Ambarzumian-type mixed spectral problem positively for the \(m=2\) case.

Inverse Spectral Problem If S\(_{n,m}\) and F\(_{n}\) share m consecutive eigenvalues, then do we get \(b_1 = \cdots = b_m = 0\), i.e. S\(_{n,m} = \mathbf{F }_n\)?

When \(m=1\), this problem becomes a special case of the following result of Gesztesy and Simon [10]. For a Jacobi matrix given as (1.1), let us consider the sequences \(\{a_k\}\) and \(\{b_k\}\) as a single sequence \(b_1,a_1,b_2,a_2,\cdots ,a_{n-1},b_n = c_1,c_2,\cdots ,c_{2n-1}\), i.e. \(c_{2k-1} {:=} b_k\) and \(c_{2k} {:=} a_k\).

Theorem 4.1

([10], Theorem 4.2) Suppose that \(1 \le k \le n\) and \(c_{k+1}, \cdots ,c_{2n-1}\) are known, as well as k of the eigenvalues. Then \(c_1,\cdots ,c_k\) are uniquely determined.

Proposition 4.2

Let \(\lambda _{k} = {\tilde{\lambda }}_{k}\) for some \(k \in \{1,2,\dots ,n-1\}\), where \({\tilde{\lambda }}_1< {\tilde{\lambda }}_2< \dots < {\tilde{\lambda }}_n\) denote the eigenvalues of S\(_{n,1}\) and \(\lambda _1< \lambda _2< \dots < \lambda _n\) denote the eigenvalues of F\(_n\). Then \(b_1 = 0\), i.e. S\(_{n,1} = \mathbf{F }_n\).

Proof

Following the notations of Theorem 4.1, \(c_{2}, \cdots ,c_{2n-1}\) are known for S\(_{n,1}\). Hence by letting \(k=1\) in Theorem 4.1, we get S\(_{n,1} = \mathbf{F }_n\). \(\square \)

Now let us prove the \(m=2\) case. Let \(\lambda _{1}<\lambda _{2}<\dots <\lambda _{n}\) denote the eigenvalues of F\(_{n}\), and let \({\tilde{\lambda }}_1<{\tilde{\lambda }}_2<\dots <{\tilde{\lambda }}_n\) denote the eigenvalues of S\(_{n,2}\).

Theorem 4.3

Let \(\lambda _{k} = {\tilde{\lambda }}_{k}\) and \(\lambda _{k+1} = {\tilde{\lambda }}_{k+1}\) for some \(k \in \{1,2,\dots ,n-1\}\). Then \(b_1 = 0\) and \(b_2 = 0\), i.e. S\(_{n,2} = \mathbf{F }_n\).

Proof

We start by proving the following claim.

Claim: If \(\lambda _k = {\tilde{\lambda }}_{k}\) and \(\lambda _{k+1} = {\tilde{\lambda }}_{k+1}\), then either \(b_{1} = b_{2} = 0\), or \(b_{1} = \lambda _{k}+\lambda _{k+1}\) and \(b_{2} = 1/\lambda _{k}+1/\lambda _{k+1}\).

Let us consider the characteristic polynomial of S\(_{n,2}\) using cofactor expansion on the first row of \(\lambda I-\mathbf{S }_{n,2}\).

$$\begin{aligned}&\det (\lambda I-\mathbf{S }_{n,2}) = \\&\quad (\lambda - b_{1}) \begin{vmatrix} \lambda -b_{2}&-1&0&\ldots&0\\ -1&\lambda&-1&\ddots&\vdots \\ 0&-1&\lambda&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\ldots&0&-1&\lambda \\ \end{vmatrix}_{(n-1)}+\quad \begin{vmatrix} -1&-1&0&\ldots&0\\ -1&\lambda&-1&\ddots&\vdots \\ 0&-1&\lambda&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\ldots&0&-1&\lambda \\ \end{vmatrix}_{(n-1)} \end{aligned}$$

Using cofactor expansion on the first row for the first term and the first column for the second term, we get

$$\begin{aligned} \det (\lambda I-\mathbf{S }_{n,2})&= (\lambda - b_{1})(\lambda - b_{2}) \det (\lambda I-\mathbf{F }_{n-2})\\&\quad +(\lambda -b_{1}) \begin{vmatrix} -1&-1&0&\ldots&0\\ 0&\lambda&-1&\ddots&\vdots \\ 0&-1&\lambda&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\ldots&0&-1&\lambda \\ \end{vmatrix}_{(n-2)} - \det (\lambda I-\mathbf{F }_{n-2}) \end{aligned}$$

Finally, using cofactor expansion on the first column of the second term, we get

$$\begin{aligned}&\det (\lambda I-\mathbf{S }_{n,2}) = [(\lambda - b_{1})(\lambda - b_{2})-1] \det (\lambda I-\mathbf{F }_{n-2})\nonumber \\&\quad -(\lambda -b_{1})\det (\lambda I-\mathbf{F }_{n-3}). \end{aligned}$$
(4.1)

Note that the cofactor expansions we derived so far are given in [10] in a more general setting for Jacobi matrices. One just needs to look at (2.4) in [10], keeping in mind that (2.5) in [10] holds.

Since \({\tilde{\lambda }}_{k}=\lambda _{k}\) and \({\tilde{\lambda }}_{k+1}=\lambda _{k+1}\), the right-hand side of (4.1) vanishes when \(\lambda = \lambda _{k}\) or \(\lambda = \lambda _{k+1}\). Therefore for \(\lambda = \lambda _{k}\) or \(\lambda = \lambda _{k+1}\) we get

$$\begin{aligned} \frac{(\lambda -b_{1})(\lambda -b_{2})-1}{\lambda - b_{1}} = \frac{\det (\lambda I-\mathbf{F }_{n-3})}{\det (\lambda I-\mathbf{F }_{n-2})}. \end{aligned}$$
(4.2)

Equation (4.2) is also valid for \(\mathbf{F }_{n}\), i.e. when \(b_1=b_2=0\). The right-hand side of the equation does not depend on \(b_{1}\) or \(b_{2}\) and hence is identical for \(\mathbf{S }_{n,2}\) and \(\mathbf{F }_{n}\). Therefore the left-hand side of (4.2) must also be identical for \(\mathbf{S }_{n,2}\) and \(\mathbf{F }_{n}\) when \({\tilde{\lambda }}_{k}=\lambda _{k}\) and \({\tilde{\lambda }}_{k+1}=\lambda _{k+1}\). Hence,

$$\begin{aligned} \frac{(\lambda -b_{1})(\lambda -b_{2})-1}{\lambda - b_{1}} = \frac{(\lambda -0)(\lambda -0)-1}{\lambda - 0} \end{aligned}$$
(4.3)

for \(\lambda = \lambda _{k}\) or \(\lambda = \lambda _{k+1}\). Therefore,

$$\begin{aligned} \lambda (\lambda -b_{1})(\lambda -b_{2}) - \lambda&= (\lambda ^{2}-1)(\lambda -b_{1})\\ \lambda ^{3}-(b_{1}+b_{2})\lambda ^{2}+b_{1}b_{2}\lambda - \lambda&= \lambda ^{3} -b_{1}\lambda ^{2} - \lambda + b_{1}\\ -b_{2}\lambda ^{2}+b_{1}b_{2}\lambda - b_{1}&= 0 \end{aligned}$$

for \(\lambda = \lambda _{k}\) or \(\lambda = \lambda _{k+1}\). If \(b_{2} = 0\), then \(b_{1} = 0\) from the last equation above, so we can assume \(b_{2} \ne 0\). Then \(\lambda ^{2}-b_{1}\lambda + b_{1}/b_{2} = 0\) for \(\lambda = \lambda _{k}\) or \(\lambda = \lambda _{k+1}\).

Since \(x^{2}-b_{1}x+b_{1}/b_{2}\) is a monic polynomial with two distinct roots \(x=\lambda _{k}\) and \(x=\lambda _{k+1}\), we get

$$\begin{aligned} x^{2}-b_{1}x+b_{1}/b_{2} = (x-\lambda _{k})(x-\lambda _{k+1}) \end{aligned}$$

which implies

$$\begin{aligned} x^{2}-b_{1}x+b_{1}/b_{2} = x^{2}-(\lambda _{k}+\lambda _{k+1})x + \lambda _{k}\lambda _{k+1}. \end{aligned}$$

Comparing coefficients we get our claim, since \(b_{1} = \lambda _{k}+\lambda _{k+1}\), and \(b_{1}/b_{2} = \lambda _{k}\lambda _{k+1}\) implies

$$\begin{aligned} b_2 = \frac{b_{1}}{\lambda _{k}\lambda _{k+1}} = \frac{\lambda _{k}+ \lambda _{k+1}}{\lambda _{k}\lambda _{k+1}} = \frac{1}{\lambda _{k}}+\frac{1}{\lambda _{k+1}} \end{aligned}$$

Now our goal is to get a contradiction for the second case of the claim, i.e. when \(b_{1} = \lambda _{k}+\lambda _{k+1}\) and \(b_2 = 1/\lambda _{k}+1/\lambda _{k+1}\), so let us assume

$$\begin{aligned} b_{1} = \lambda _{k}+\lambda _{k+1} \quad \text {and} \quad b_2 = 1/\lambda _{k}+1/\lambda _{k+1}. \end{aligned}$$

First let us show that \(b_1\) and \(b_2\) have the same sign. If n is even and \(k = n/2\), then \(\lambda _k = -\lambda _{k+1}\), hence \(b_1 = b_2 = 0\) and we are back in the first case of the claim. If n is odd and \(k = (n-1)/2\) or \(k = (n+1)/2\), then one of the eigenvalues \(\lambda _k\) or \(\lambda _{k+1}\) is zero, so \(b_2\) in the second case is undefined and the first case must hold. For all other values of k, the two consecutive eigenvalues \(\lambda _k\) and \(\lambda _{k+1}\), and hence \(b_1\) and \(b_2\), have the same sign.

Without loss of generality, let us assume both \(\lambda _k\) and \(\lambda _{k+1}\) are negative and \(b_1 \le b_2\). Let us define the matrix \(\mathbf{M }_{n}(t)\) with the real parameter t as follows:

$$\begin{aligned} \mathbf{M }_{n}(t) {:=} \begin{pmatrix} -t & 1 & 0 & \ldots & 0\\ 1 & -t & 1 & \ddots & \vdots \\ 0 & 1 & 0 & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots & 1\\ 0 & \ldots & 0 & 1 & 0 \end{pmatrix}. \end{aligned}$$

Note that the kth eigenvalue of \(\mathbf{M }_n(-b_2)\) is greater than or equal to \({\tilde{\lambda }}_k\), since \(\mathbf{M }_n(-b_2) \ge \mathbf{S }_{n,2}\). Let us also note that \(\mathbf{M }_n(0) = \mathbf{F }_n\). Let us denote the kth eigenvalue of \(\mathbf{M }_n(t)\) by \(\lambda _k(t)\) and the corresponding eigenvector by X(t), normalized as \(||X(t)||=1\). Since \(\mathbf{M }_n(t)\) is a smooth function of t around 0 and \(\lambda _k\) is a simple eigenvalue of \(\mathbf{F }_n\), we get that \(\lambda _k(t)\) is a simple eigenvalue of \(\mathbf{M }_n(t)\), \(\lambda _k(0) = \lambda _k\), and \(\lambda _k(t)\) and X(t) are smooth functions of t around 0 (see Theorems 9.7 and 9.8 in [14]). Let us also observe that \(\mathbf{M }_n(t)\) is self-adjoint, \(||X(t)|| = 1\) and \(\mathbf{M }_n(t)X(t) = \lambda _k(t)X(t)\). Therefore the Hellmann–Feynman Theorem (Theorem 1.4.7 in [17]) implies

$$\begin{aligned} \lambda _k'(t) = \langle X(t),\mathbf{M }_n'(t)X(t) \rangle = -X_1^2(t) - X_2^2(t), \end{aligned}$$
(4.4)

where \(X(t)^{\mathsf {T}} = [X_1(t),X_2(t),\ldots ,X_n(t)]\). Since X(t) is a non-zero eigenvector of the tridiagonal matrix \(\mathbf{M }_n(t)\), at least one of \(X_1(t)\) and \(X_2(t)\) is non-zero. Therefore by Eq. (4.4), \(\lambda _k'(t) \le 0\) for every t (note that \(\mathbf{M }_n(t)\) is a Jacobi matrix for every t, so all of its eigenvalues remain simple), and there exists an open interval \(I \subset {\mathbb {R}}\) containing 0 such that \(\lambda _k'(t) < 0\) for \(t \in I\). In particular \(\lambda _k(t)\) is strictly decreasing on I and non-increasing on \([0,-b_2]\). This implies the existence of \(0< t_0 < -b_2\) satisfying

$$\begin{aligned} \lambda _k > \lambda _k(t_0) \ge \lambda _k(-b_2) \ge {\tilde{\lambda }}_k. \end{aligned}$$

This contradicts our assumption that \(\lambda _k = {\tilde{\lambda }}_k\). Therefore only the first case of the claim is true, i.e. \(b_1 = b_2 = 0\) and hence \(\mathbf{S }_{n,2} = \mathbf{F }_n\). \(\square \)
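The contradiction derived above can also be observed numerically. The following Python sketch (an illustration only; it assumes NumPy, uses the arbitrary choice n = 7 with the paper's index k = 2, and 0-based indexing in the code) shows that with \(b_1 = \lambda _k+\lambda _{k+1}\) and \(b_2 = 1/\lambda _k+1/\lambda _{k+1}\) the kth eigenvalue of S\(_{n,2}\) indeed drops strictly below \(\lambda _k\).

\begin{verbatim}
import numpy as np

# Illustration of the final step of Theorem 4.3: the second case of the claim
# is incompatible with lambda_k = lambda_tilde_k.
n, k = 7, 1                                   # Python index k = 1 <-> paper's k = 2
F = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
lam = np.linalg.eigvalsh(F)                   # lambda_1 < ... < lambda_n, ascending
lk, lk1 = lam[k], lam[k + 1]                  # both negative for this choice of k
b1, b2 = lk + lk1, 1.0 / lk + 1.0 / lk1
b1, b2 = min(b1, b2), max(b1, b2)             # keep the ordering b1 <= b2 used above
S = F.copy()
S[0, 0], S[1, 1] = b1, b2                     # this is S_{n,2} in the second case
lam_tilde = np.linalg.eigvalsh(S)
print(lam_tilde[k] - lk)                      # strictly negative, not zero
\end{verbatim}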