1 Introduction

In this paper, we will consider the following 2m-dimensional discrete Hamiltonian system

$$\begin{aligned} J(n+1)y(n+1)-J(n)y(n)=\lambda A(n){\widetilde{y}}(n)+B(n){\widetilde{y}}(n), \end{aligned}$$
(1)

where \(n\in \mathbb {N}:=\left\{ 0,1,2,...\right\} ,\) \(\lambda \) is a spectral parameter with \( \textrm{Im}\lambda \ne 0,\) \(y(n):=\begin{bmatrix} y_{1}(n)\\ y_{2}(n) \end{bmatrix}\) is a \(2m\times 1\) vector such that \(y_{1}\) and \(y_{2}\) are \(m\times 1\) vectors and \(\tilde{y}(n):=\begin{bmatrix} y_{1}(n+1)\\ y_{2}(n) \end{bmatrix},\)

$$\begin{aligned} J(n)=\left[ \begin{array}{cc} 0 &{} -E^{*}(n) \\ E(n) &{} 0 \end{array} \right] ,A(n)=\left[ \begin{array}{cc} P(n) &{} W^{*}(n) \\ W(n) &{} V(n) \end{array} \right] ,B(n)=\left[ \begin{array}{cc} K(n) &{} L(n) \\ M(n) &{} N(n) \end{array} \right] , \end{aligned}$$
(2)

E is an \(m\times m\) nonsingular matrix function so that \(\det J(n)\ne 0\) for each \(n\in \mathbb {N} \), 0 denotes the \(m\times m\) zero matrix, P, W, V, K, L, M, N are \(m\times m\) matrix functions such that \(E(n+1)-M(n)-\lambda W(n)\) is invertible for each \(n\in \mathbb {N} \) and each complex parameter \(\lambda \) when W(n) is not the zero matrix for \( n\in \mathbb {N} \), and

$$\begin{aligned} A^{*}(n)=A(n)\ge 0,\text { }K^{*}(n)=K(n),\text { }N^{*}(n)=N(n), \text { }E(n+1)-E(n)=M(n)-L^{*}(n). \end{aligned}$$
(3)

We assume the following definiteness condition

$$\begin{aligned} \sum \limits _{n=0}^{r}{\widetilde{y}}^{*}(n)A(n){\widetilde{y}}(n)>0 \end{aligned}$$

for every nontrivial solution y of (1) and all sufficiently large integers \(r>0.\)
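For orientation, we record one simple admissible choice of coefficients (this example is ours and is given only for illustration):

$$\begin{aligned} m=1,\quad E(n)\equiv 1,\quad W(n)=L(n)=M(n)=0,\quad P(n)=V(n)=1,\quad K(n),N(n)\in \mathbb {R}. \end{aligned}$$

Then (3) holds, \(E(n+1)-M(n)-\lambda W(n)=1\) is invertible for every \(\lambda ,\) and \(\sum _{n=0}^{r}{\widetilde{y}}^{*}(n)A(n){\widetilde{y}}(n)=\sum _{n=0}^{r}\left( \left| y_{1}(n+1)\right| ^{2}+\left| y_{2}(n)\right| ^{2}\right) >0\) for every nontrivial solution y, so the definiteness condition is satisfied.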

The main aim of this paper is to introduce a lower bound for the number of linearly independent summable-square solutions of (1) that seems to be new in the literature. However, before passing to the details we shall give some background information on continuous and discrete equations.

The spectral analysis of singular differential equations was initiated by Weyl in 1910 with his pioneering paper [39]. Indeed, Weyl introduced a lower bound for the number of linearly independent integrable-square solutions of the equation

$$\begin{aligned} -(py^{\prime })^{\prime }+qy=\lambda y,\text { }x\in [0,\infty ), \end{aligned}$$
(4)

where \(\lambda \) is a spectral parameter, and p, q are real-valued, locally integrable functions on \([0,\infty ),\) \(p>0,\) with the aid of the nested circles corresponding to the linearly independent solutions of (4) satisfying certain boundary conditions at regular points in \([0,\infty ).\) This approach requires symmetric boundary conditions. These results were rehandled by Titchmarsh [35] and generalized by Kodaira [20], Sims [32], Everitt [9, 10], Pleijel [26, 27] and others to higher-order differential equations as well as to equations containing complex-valued coefficients.

In 1964, Atkinson [3] showed that Eq. (4) (with a positive weight function) can be handled as the following first-order equation

$$\begin{aligned} \left[ \begin{array}{cc} 0 &{} -1 \\ 1 &{} 0 \end{array} \right] \left[ \begin{array}{c} y \\ py^{\prime } \end{array} \right] ^{\prime }=\left\{ \lambda \left[ \begin{array}{cc} w &{} 0 \\ 0 &{} 0 \end{array} \right] +\left[ \begin{array}{cc} -q &{} 0 \\ 0 &{} p^{-1} \end{array} \right] \right\} \left[ \begin{array}{c} y \\ py^{\prime } \end{array} \right] , \end{aligned}$$
(5)

and indeed, he introduced a lower bound for the number of linearly independent integrable-square solutions of the following r-dimensional equation, which contains (5) as well as any rth-order formally symmetric differential equation [37],

$$\begin{aligned} JY^{\prime }=\left[ \lambda A+B\right] Y,\text { }x\in [a,b), \end{aligned}$$
(6)

where \(\lambda \) is a spectral parameter, J, A, B are \(r\times r\) matrices, J is a constant matrix with \(J^{*}=-J,\) A and B are locally integrable matrix functions with \(A^{*}=A\ge 0,\) \(B^{*}=B,\) and each nontrivial solution of (6) satisfies the so-called definiteness condition

$$\begin{aligned} \int \limits _{a}^{b}Y^{*}AY>0. \end{aligned}$$

Valuable contributions on this theory have been introduced by Kogan and Rofe–Beketov [21], Lee [24], Hinton and Shaw [18, 19], Krall [22], Lesch and Malamud [25] and the others.

Although there exist a huge number of works on continuous scalar and matrix differential equations containing Weyl's results, this is not the case for discrete equations. It seems that Hellinger was the first to bring Weyl's results to discrete equations. Indeed, in 1922 Hellinger [11] considered the continued fraction

$$\begin{aligned} \cfrac{1}{a_{1}-\lambda -\cfrac{b_{1}^{2}}{a_{2}-\lambda -\cfrac{b_{2}^{2}}{a_{3}-\lambda -\cdots }}}, \end{aligned}$$

and the discrete equation

$$\begin{aligned} b_{n}y_{n+1}=\left( a_{n}-\lambda \right) y_{n}-b_{n-1}y_{n-1}, \end{aligned}$$
(7)

where \(a_{n}\) and \(b_{n}\) are real numbers, \(n=1,2,...,\) with \(b_{0}=0,\) and \(\lambda \) is a spectral parameter. After constructing the nested circles corresponding to Eq. (7), he introduced a lower bound for the number of linearly independent summable-square solutions of (7). Hellinger's work was followed by Hellinger and Wall [12], Wall and Wetzel [38] and Dennis and Wall [8]. In 1961, Akhiezer [2] constructed the nested-circles approach for Eq. (7) with the aid of orthogonal polynomials on the real line. Berezanskiĭ [4] and Atkinson [3] also shared some results on Eq. (7).

In 1996, Ahlbrandt and Peterson [1] considered the following system of discrete equations

$$\begin{aligned} \begin{array}{l} y(n+1)-y(n)=A(n)y(n+1)+B(n)z(n), \\ z(n+1)-z(n)=C(n)y(n+1)-A^{*}(n)z(n), \end{array} \end{aligned}$$

on a certain discrete set, where y and z are \(m\times 1\) vector functions, A, B, C are \(m\times m\) matrices such that \(I_{m}-A(n)\) is nonsingular for each n on the set (\(I_{m}\) is the \(m\times m\) identity matrix), \(B^{*}(n)=B(n)\) and \(C^{*}(n)=C(n).\) The origin of this system appears in Atkinson's book [3], Chapt. 3.

In 2004, Clark and Gesztesy [6] considered the following discrete equation

$$\begin{aligned} \left[ \begin{array}{cc} 0 &{} \rho (k)S^{+} \\ \rho ^{-}(k)S^{-} &{} 0 \end{array} \right] y(k)=\left[ \lambda A(k)+B(k)\right] y(k), \end{aligned}$$
(8)

where \(k\in \mathbb {Z}:=\left\{ ...,-1,0,1,...\right\} ,\) y is a \(2m\times r\) matrix, \(1\le r\le 2m,\) \(S^{\pm }g(k)=g(k\pm 1),\) \(\rho ^{-}S^{-}\) is the formal adjoint of \(\rho S^{+}\), \(\rho \) is an \(m\times m\) nonsingular matrix for each \(k\in \mathbb {Z} \) with \(\rho ^{*}(k)=\rho (k),\) \(A(k)\ge 0,\) \(B^{*}(k)=B(k),\) such that

$$\begin{aligned} A(k)=\left[ \begin{array}{cc} A_{11}(k) &{} A_{12}(k) \\ A_{21}(k) &{} A_{22}(k) \end{array} \right] \end{aligned}$$

\(A_{ij}(k)\) is an \(m\times m\) matrix, \(1\le i,j\le 2,\) \(\lambda A_{12}(k)+B_{12}(k)\) is invertible for each \(k\in \mathbb {Z},\) \(\lambda \) is a complex parameter, and

$$\begin{aligned} \sum \limits _{k\in [c,d]\cap \mathbb {Z} }y^{*}(k)A(k)y(k)>0. \end{aligned}$$

They introduced a lower bound for the number of linearly independent summable-square solutions of (8) using the nested-circles approach.

In 2006, Shi [30] considered the following discrete Hamiltonian system

$$\begin{aligned} Jy(n+1)-Jy(n)=\left[ \lambda A(n)+B(n)\right] {\widetilde{y}}(n), \end{aligned}$$
(9)

where J, A, B are \(2m\times 2m\) matrices such that

$$\begin{aligned} J=\left[ \begin{array}{cc} 0 &{} -I_{m} \\ I_{m} &{} 0 \end{array} \right] \end{aligned}$$
(10)

and

$$\begin{aligned} A(n)=\left[ \begin{array}{cc} A_{1}(n) &{} 0 \\ 0 &{} A_{2}(n) \end{array} \right] \ge 0,\text { }B^{*}(n)=B(n)=\left[ \begin{array}{cc} K(n) &{} L^{*}(n) \\ L(n) &{} N(n) \end{array} \right] , \end{aligned}$$
(11)

where \(I_{m}-L(n)\) is invertible for each \(n\in \mathbb {N},\) and each nontrivial solution of (9) is assumed to satisfy

$$\begin{aligned} \sum \limits _{n=0}^{s}{\widetilde{y}}^{*}(n)A(n){\widetilde{y}}(n)>0,\text { } s\ge s_{0},\text { }s_{0}\in \mathbb {N} . \end{aligned}$$

Shi used nested circles and extension theory to introduce a lower bound for the number of linearly independent summable-square solutions of (9).

We shall note that in 2011 Shi and Sun [31] showed that operator theory is not suitable for such discrete equations, and they rehandled some results of [30] with the aid of subspace theory. This theory has been used in [28, 29, 33].

The number of summable-square solutions of discrete symplectic systems has been examined in [7, 15], and, as remarked in [15, 16], discrete symplectic systems contain discrete Hamiltonian systems (also see [1, 5]).

We shall note that the left-definite version of Eq. (1) has been studied in [36].

In this paper, with the aid of Sylvester's inertia index theory and the Hermitian forms corresponding to the solutions of (1), together with maximal subspaces, we will share a lower bound for the number of linearly independent summable-square solutions of (1), generalizing Eq. (9). Moreover, we will share the Titchmarsh–Weyl matrix and introduce some inequalities. Finally, we will introduce a limit-point criterion for Eq. (1).

2 Hermitian Forms

In this section, we will share some basic results and maximal subspaces related to the Hermitian forms that will allow us to construct nested ellipsoids and hence a lower bound for the number of linearly independent summable-square solutions of (1). For the basic theory and results on quadratic forms, we refer the reader to [13, 14].

Equation (1) can be written as

$$\begin{aligned} \begin{array}{l} \left( E(n+1)-M(n)-\lambda W(n)\right) y_{1}(n+1)=E(n)y_{1}(n)+\left( N(n)+\lambda V(n)\right) y_{2}(n), \\ E^{*}(n+1)y_{2}(n+1)=\left( E^{*}(n)-L(n)-\lambda W^{*}(n)\right) y_{2}(n)-\left( K(n)+\lambda P(n)\right) y_{1}(n+1). \end{array} \end{aligned}$$
(12)

Existence and uniqueness of solutions of (1) can be obtained from the form (12) and our assumptions. Indeed, from (12) we get the following recurrence relation

$$\begin{aligned} y(n+1)=\mathbb {S}(n,\lambda )y(n),\text { }n\in \mathbb {N} , \end{aligned}$$

where

$$\begin{aligned} \mathbb {S}(n,\lambda )=\left[ \begin{array}{cc} S_{1}(n,\lambda ) &{} S_{2}(n,\lambda ) \\ S_{3}(n,\lambda ) &{} S_{4}(n,\lambda ) \end{array} \right] . \end{aligned}$$

Here \(S_{1}(n,\lambda )=(E(n+1)-M(n)-\lambda W(n))^{-1}E(n),\) \( S_{2}(n,\lambda )=(E(n+1)-M(n)-\lambda W(n))^{-1}(N(n)+\lambda V(n)),\) \( S_{3}(n,\lambda )=-E^{*-1}(n+1)(K(n)+\lambda P(n))(E(n+1)-M(n)-\lambda W(n))^{-1}E(n)\) and \(S_{4}(n,\lambda )=E^{*-1}(n+1)(E^{*}(n)-L(n)-\lambda W^{*}(n))-E^{*-1}(n+1)(K(n)+\lambda P(n))(E(n+1)-M(n)-\lambda W(n))^{-1}(N(n)+\lambda V(n)).\) A direct calculation and assumptions (3) show that

$$\begin{aligned} \mathbb {S}^{*}(n,{\overline{\lambda }})J(n+1)\mathbb {S}(n,\lambda )=\left[ \begin{array}{cc} 0 &{} -E^{*}(n) \\ E(n) &{} E(n)-E^{*}(n) \end{array} \right] . \end{aligned}$$
(13)

Therefore, taking determinants in (13) and noting that the matrix on the right-hand side of (13) has determinant \(\det E(n)\det E^{*}(n)=\det J(n),\) we obtain that

$$\begin{aligned} \left| \det \mathbb {S}(n,\lambda )\right| ^{2}=\frac{\det J(n)}{\det J(n+1)},\text { }n\in \mathbb {N} . \end{aligned}$$
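The recurrence relation above also makes (1) convenient for numerical experimentation. The following is a minimal sketch (Python/NumPy; the function name transition_matrix and all concrete coefficient values are our illustrative assumptions, not taken from the analysis above) of how \(\mathbb {S}(n,\lambda )\) may be assembled and the recurrence iterated.

```python
import numpy as np

def transition_matrix(E, E_next, P, W, V, K, L, M, N, lam):
    """Assemble S(n, lam) from the block formulas; E = E(n), E_next = E(n+1)."""
    T = np.linalg.inv(E_next - M - lam * W)        # (E(n+1) - M(n) - lam W(n))^{-1}
    S1 = T @ E
    S2 = T @ (N + lam * V)
    G = np.linalg.inv(E_next.conj().T)             # E^{*-1}(n+1)
    S3 = -G @ (K + lam * P) @ S1
    S4 = G @ (E.conj().T - L - lam * W.conj().T) - G @ (K + lam * P) @ S2
    return np.block([[S1, S2], [S3, S4]])

# Illustrative constant coefficients with m = 1, chosen so that (3) holds:
# A = A* >= 0, K = K*, N = N*, E(n+1) - E(n) = M(n) - L*(n).
E = np.array([[1.0]]); P = np.array([[1.0]]); W = np.array([[0.0]])
V = np.array([[1.0]]); K = np.array([[0.5]]); N = np.array([[0.3]])
L = np.array([[0.2]]); M = np.array([[0.2]])       # constant E forces M = L*
lam = 1.0 + 1.0j                                   # spectral parameter, Im(lam) != 0

y = np.array([[1.0], [0.0]], dtype=complex)        # y(0)
for n in range(5):                                 # iterate y(n+1) = S(n, lam) y(n)
    y = transition_matrix(E, E, P, W, V, K, L, M, N, lam) @ y
    print(n + 1, y.ravel())
```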

For \(y=y(n)\) and \(z=z(n),\) \(n\in \mathbb {N},\) we shall adopt the notation

$$\begin{aligned} \left\langle y,z\right\rangle \mid _{s}^{r}=\sum \limits _{n=s}^{r}{\widetilde{z}}^{*}(n)A(n){\widetilde{y}}(n), \end{aligned}$$

where \(s,r\in \mathbb {N} \) with \(s<r.\)

Let \(\mathcal {D}\) be the set of all solutions \(y(n,\lambda ),\) \(n\in \mathbb {N},\) of (1). For \(y(n,\lambda ),z(n,\mu )\in \mathcal {D}\), one gets the following

$$\begin{aligned} \begin{array}{l} \lambda \sum \limits _{n=s}^{r}{\widetilde{z}}^{*}(n)A(n){\widetilde{y}} (n)=\sum \limits _{n=s}^{r}\left\{ -z_{1}^{*}(n+1)E^{*}(n+1)y_{2}(n+1)+z_{2}^{*}(n)E(n+1)y_{1}(n+1)\right. \\ +z_{1}^{*}(n+1)E^{*}(n)y_{2}(n)-z_{2}^{*}(n)E(n)y_{1}(n)-z_{1}^{*}(n+1)K(n)y_{1}(n+1) \\ \left. -z_{1}^{*}(n+1)L(n)y_{2}(n)-z_{2}^{*}(n)M(n)y_{1}(n+1)-z_{2}^{*}(n)N(n)y_{2}(n)\right\} \end{array} \end{aligned}$$
(14)

and

$$\begin{aligned} \begin{array}{l} {\overline{\mu }}\sum \limits _{n=s}^{r}{\widetilde{z}}^{*}(n)A(n)\widetilde{y }(n)=\sum \limits _{n=s}^{r}\left\{ z_{1}^{*}(n+1)E^{*}(n+1)y_{2}(n)-z_{2}^{*}(n+1)E(n+1)y_{1}(n+1)\right. \\ -z_{1}^{*}(n)E^{*}(n)y_{2}(n)+z_{2}^{*}(n)E(n)y_{1}(n+1)-z_{1}^{*}(n+1)K(n)y_{1}(n+1) \\ \left. -z_{1}^{*}(n+1)M^{*}(n)y_{2}(n)-z_{2}^{*}(n)L^{*}(n)y_{1}(n+1)-z_{2}^{*}(n)N(n)y_{2}(n)\right\} \end{array} \end{aligned}$$
(15)

Using (14), (15) and (3), we get that

$$\begin{aligned} \begin{array}{l} (\lambda -{\overline{\mu }})\sum \limits _{n=s}^{r}{\widetilde{z}}^{*}(n)A(n) {\widetilde{y}}(n)=\sum \limits _{n=s}^{r}\left\{ -z_{1}^{*}(n+1)E^{*}(n+1)y_{2}(n+1)\right. \\ +z_{2}^{*}(n)[E(n+1)-M(n)]y_{1}(n+1)+z_{1}^{*}(n+1)[E^{*}(n)-L(n)]y_{2}(n) \\ -z_{2}^{*}(n)E(n)y_{1}(n)+z_{1}^{*}(n+1)[M^{*}(n)-E^{*}(n+1)]y_{2}(n) \\ +z_{2}^{*}(n+1)E(n+1)y_{1}(n+1)+z_{1}^{*}(n)E^{*}(n)y_{2}(n) \\ \left. +z_{2}^{*}(n)[-E(n)+L^{*}(n)]y_{1}(n+1)\right\} \\ =\left[ \begin{array}{cc} z_{1}^{*}(r+1) &{} z_{2}^{*}(r+1) \end{array} \right] \left[ \begin{array}{cc} 0 &{} -E^{*}(r+1) \\ E(r+1) &{} 0 \end{array} \right] \left[ \begin{array}{c} y_{1}(r+1) \\ y_{2}(r+1) \end{array} \right] \\ -\left[ \begin{array}{cc} z_{1}^{*}(s) &{} z_{2}^{*}(s) \end{array} \right] \left[ \begin{array}{cc} 0 &{} -E^{*}(s) \\ E(s) &{} 0 \end{array} \right] \left[ \begin{array}{c} y_{1}(s) \\ y_{2}(s) \end{array} \right] . \end{array} \end{aligned}$$

Hence, for \(y(n,\lambda )\in \mathcal {D}\) we have the following

$$\begin{aligned} 2\textrm{Im}\lambda \left\langle y,y\right\rangle \mid _{s}^{r}=[y,y]\mid _{s}^{r+1}, \end{aligned}$$
(16)

where \([y,y]\mid _{s}^{r+1}=[y,y](r+1)-[y,y](s)\) and

$$\begin{aligned}{}[y,y](n)=y^{*}(n)\left( J(n)/i\right) y(n). \end{aligned}$$
(17)

The form \([\cdot ,\cdot ](n):=[y,y](n)\) is a Hermitian form, and such forms admit Sylvester's inertia index theory. To apply this theory, we first rewrite (17) in the following equivalent form

$$\begin{aligned} \begin{array}{ll} 2[y,y](n)= &{} \left( y_{1}(n)+iE^{*}(n)y_{2}(n)\right) ^{*}\left( y_{1}(n)+iE^{*}(n)y_{2}(n)\right) \\ &{} -\left( y_{1}(n)-iE^{*}(n)y_{2}(n)\right) ^{*}\left( y_{1}(n)-iE^{*}(n)y_{2}(n)\right) . \end{array} \end{aligned}$$
(18)
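Indeed, expanding the squares on the right-hand side of (18) (an elementary computation which we record only for the reader's convenience) gives

$$\begin{aligned} \left( y_{1}\pm iE^{*}y_{2}\right) ^{*}\left( y_{1}\pm iE^{*}y_{2}\right) =y_{1}^{*}y_{1}+y_{2}^{*}EE^{*}y_{2}\pm i\left( y_{1}^{*}E^{*}y_{2}-y_{2}^{*}Ey_{1}\right) , \end{aligned}$$

where the argument n is suppressed, so that the right-hand side of (18) equals \(2i\left( y_{1}^{*}E^{*}y_{2}-y_{2}^{*}Ey_{1}\right) =2y^{*}\left( J(n)/i\right) y=2[y,y](n),\) in agreement with (17).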

Equation (18) shows that, at each \(n\in \mathbb {N},\) the Hermitian form \([\cdot ,\cdot ](n)\) can be represented as a sum of \(\textbf{i}_{+}(n)\) squares of absolute values minus \(\textbf{i}_{-}(n)\) squares of absolute values, where \(\textbf{i}_{+}(n)+\textbf{i}_{-}(n)\) is the number of linearly independent linear forms appearing in this representation and

$$\begin{aligned} \textbf{i}_{+}(n)\le m,\text { }\textbf{i}_{-}(n)\le m. \end{aligned}$$
(19)

The numbers \(\textbf{i}_{+}(n)\) and \(\textbf{i}_{-}(n)\) are called the positive and negative inertia indices, respectively, of the Hermitian form \([\cdot ,\cdot ](n)\) at \(n\in \mathbb {N}.\)

An interesting property of the Hermitian form is the following.

Lemma 2.1

Let \(\textrm{Im}\lambda \ne 0\) in (1). Then the inertia indices \(\textbf{i}_{+}(n)\) and \(\textbf{i}_{-}(n)\) of \([\cdot ,\cdot ](n)\) on \(\mathcal {D}\) are independent of \(n\in \mathbb {N} \), and \(\textbf{i}_{+}(n)=\textbf{i}_{-}(n)=m\) at any \( n\in \mathbb {N}.\)

Proof

We shall consider Eq. (16) for sufficiently large r and \( \textrm{Im}\lambda >0.\) Let the positive and negative inertia indices of \([y,y](r)\) and \([y,y](s)\) be \((\textbf{i}_{+}(r),\textbf{i}_{-}(r))\) and \(( \textbf{i}_{+}(s),\textbf{i}_{-}(s)),\) respectively. Then the right-hand side of (16) can be written as a sum of \(\textbf{i}_{+}(r)+\textbf{i} _{-}(s)\) squares minus \(\textbf{i}_{+}(s)+\textbf{i}_{-}(r)\) squares. Now we suppose that

$$\begin{aligned} \textbf{i}_{+}(r)+\textbf{i}_{-}(s)<2m. \end{aligned}$$
(20)

If we equate these \(\textbf{i}_{+}(r)+\textbf{i}_{-}(s)\) squares to zero in (16), then, because of the assumption (20), we obtain a positive-dimensional subspace of the solution space of (1). Now for \(\textrm{Im}\lambda >0\) and a function y belonging to this subspace, the left-hand side of (16) is nonnegative while the right-hand side of (16) is nonpositive; hence both sides vanish, and the definiteness condition then gives \(y(n)\equiv 0,\) \(n\in \mathbb {N}.\) Thus the subspace is trivial, which contradicts (20). Hence, we must have

$$\begin{aligned} 2m\le \textbf{i}_{+}(r)+\textbf{i}_{-}(s). \end{aligned}$$
(21)

On the other hand, from (19) and (21) we have

$$\begin{aligned} 2m\le \textbf{i}_{+}(r)+\textbf{i}_{-}(s)\le 2m \end{aligned}$$

and \(\textbf{i}_{+}(r)=\textbf{i}_{-}(s)=m.\) Since this is true for each r and s, the proof is completed. \(\square \)

The theory of quadratic forms and Lemma 2.1 allow us to consider subspaces \( D_{r}^{-},\) \(D_{0}^{+}\) of dimension m, but of no higher dimension, on which \([\cdot ,\cdot ](r)\le 0\) and \([\cdot ,\cdot ](0)\ge 0,\) respectively, with equality only for the zero function; and subspaces \(D_{r}^{+},\) \(D_{0}^{-}\) of dimension m, but of no higher dimension, on which \([\cdot ,\cdot ](r)\ge 0\) and \([\cdot ,\cdot ](0)\le 0,\) respectively, with equality only for the zero function.

Consider an element \(y\in D_{r}^{-}\cap D_{0}^{+}\) for \(\textrm{Im}\lambda >0.\) Then (16) implies that \(y=0.\) Since \(\dim D_{r}^{-}=\dim D_{0}^{+}=m\) and \(\dim \mathcal {D}=2m,\) \(\mathcal {D}\) has the representation

$$\begin{aligned} \mathcal {D}=D_{r}^{-}\oplus D_{0}^{+},\text { }\textrm{Im}\lambda >0. \end{aligned}$$
(22)

Similarly, for \(y\in D_{r}^{+}\cap D_{0}^{-}\) and \(\textrm{Im}\lambda <0\) we get from (16) that \(y=0\), and hence,

$$\begin{aligned} \mathcal {D}=D_{r}^{+}\oplus D_{0}^{-},\text { }\textrm{Im}\lambda <0. \end{aligned}$$
(23)

Let \(\chi \in \mathcal {D}\). From (22), one may infer that \( \chi =\alpha +\beta ,\) where \(\alpha \in D_{r}^{-}\) and \(\beta \in D_{0}^{+}\) for \(\textrm{Im}\lambda >0.\) Then from (16), we get that

$$\begin{aligned}{}[\chi -\beta ,\chi -\beta ](0)+2\textrm{Im}\lambda \sum \limits _{n=0}^{r}\left( {\widetilde{\chi }}(n)-{\widetilde{\beta }} (n)\right) ^{*}A(n)\left( {\widetilde{\chi }}(n)-\widetilde{\beta } (n)\right) \le 0. \end{aligned}$$
(24)

Let \(\beta _{1},...,\beta _{m}\) be a basis of \(D_{0}^{+},\) so that \(\beta \) can be represented as

$$\begin{aligned} \beta =\sum \limits _{k=1}^{m}{\widetilde{c}}_{k}\beta _{k}, \end{aligned}$$
(25)

where the \({\widetilde{c}}_{k}\) are constants. Note that \([\beta ,\beta ](0)>0\) for every nonzero \(\beta \in D_{0}^{+}.\) Hence, by (24) and (25), the left-hand side of (24) is a quadratic expression in \(\widetilde{ c}=({\widetilde{c}}_{1},...,{\widetilde{c}}_{m})\) whose second-order part is positive definite.
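For the reader's convenience, this quadratic dependence can be written out explicitly (the expansion is not carried out above but follows directly from (24) and (25)): the left-hand side of (24) equals

$$\begin{aligned} \sum \limits _{j,k=1}^{m}\overline{{\widetilde{c}}_{j}}\,{\widetilde{c}}_{k}\,\mathcal {Q}_{jk}(r)-2\textrm{Re}\sum \limits _{k=1}^{m}\overline{{\widetilde{c}}_{k}}\,\mathcal {P}_{k}(r)+\mathcal {Q}_{0}(r), \end{aligned}$$

where \(\mathcal {Q}_{jk}(r)=\beta _{j}^{*}(0)\left( J(0)/i\right) \beta _{k}(0)+2\textrm{Im}\lambda \sum \nolimits _{n=0}^{r}{\widetilde{\beta }}_{j}^{*}(n)A(n){\widetilde{\beta }}_{k}(n),\) \(\mathcal {P}_{k}(r)=\beta _{k}^{*}(0)\left( J(0)/i\right) \chi (0)+2\textrm{Im}\lambda \sum \nolimits _{n=0}^{r}{\widetilde{\beta }}_{k}^{*}(n)A(n){\widetilde{\chi }}(n),\) and \(\mathcal {Q}_{0}(r)\) denotes the analogous expression with \(\chi \) in both slots. For \(\textrm{Im}\lambda >0\) the Hermitian matrix \(\left( \mathcal {Q}_{jk}(r)\right) \) is positive definite, so the set of \({\widetilde{c}}\) satisfying (24) is a solid ellipsoid.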

Let \(\mathcal {E}(r)\) be the ellipsoid consisting of all m-tuples \( {\widetilde{c}}=({\widetilde{c}}_{1},...,{\widetilde{c}}_{m})\) appearing in the representation (25) and satisfying (24). This set is not empty, since the element \(\beta \in D_{0}^{+}\) obtained from the decomposition (22) satisfies (24). Moreover, since \(A(n)\ge 0,\) the sum in (24) is nondecreasing in r, and hence (24) also implies that

$$\begin{aligned} \mathcal {E}(r_{1})\subset \mathcal {E}(r),\text { }r<r_{1}, \end{aligned}$$
(26)

where r is a sufficiently large positive integer. Note that (26) is possible as A(n) is not the identically zero matrix on \( \mathbb {N}.\) Therefore, using (24)-(26) we obtain the following.

Theorem 2.2

\(\mathcal {E}(\infty ):=\lim _{r\rightarrow \infty }\mathcal {E}(r)=\bigcap \limits _{r}\mathcal {E}(r)\) is not empty.

Using Theorem 2.2, we may infer that there exists an m-tuple \( c=(c_{1},...,c_{m})\) belonging to all the sets \(\mathcal {E}(r),\) \(r\in \mathbb {N}.\) Let us use this m-tuple in (25). Then by (24), we obtain that

$$\begin{aligned}{}[\chi -\beta ,\chi -\beta ](0)+2\textrm{Im}\lambda \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }(n)-{\widetilde{\beta }} (n)\right) ^{*}A(n)\left( {\widetilde{\chi }}(n)-{\widetilde{\beta }} (n)\right) \le 0,\text { }\textrm{Im}\lambda >0. \end{aligned}$$

Hence, we may introduce the following.

Theorem 2.3

Let \(\chi \in \mathcal {D}\) and \( \beta \in D_{0}^{+}.\) Then for \(\textrm{Im}\lambda >0\), we have

$$\begin{aligned} \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }(n)-{\widetilde{\beta }} (n)\right) ^{*}A(n)\left( {\widetilde{\chi }}(n)-{\widetilde{\beta }} (n)\right) <\infty . \end{aligned}$$

Now let \(\chi _{1},...,\chi _{m}\) be a completion of the basis \( \beta _{1},...,\beta _{m}\) of \(D_{0}^{+}\) to a basis of \(\mathcal {D}.\) Then we obtain the following.

Theorem 2.4

For \(k=1,...,m\) and \(\textrm{Im} \lambda >0\), we obtain that

$$\begin{aligned} \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }_{k}(n)-{\widetilde{\beta }}_{k}(n)\right) ^{*}A(n)\left( {\widetilde{\chi }}_{k}(n)-{\widetilde{\beta }}_{k}(n)\right) <\infty . \end{aligned}$$
(27)

For \(\textrm{Im}\lambda <0,\) using (23) and similar steps to those introduced above, we may share the following results.

Theorem 2.5

Let \(\chi \in \mathcal {D}\), \(\beta \in D_{0}^{-}\) and \(\chi _{1},...,\chi _{m}\) be a completion of the basis \(\beta _{1},...,\beta _{m}\) of \(D_{0}^{-}\) to a basis of \(\mathcal {D}.\) Then for \(\textrm{Im}\lambda <0\), one gets that

$$\begin{aligned} \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }(n)-{\widetilde{\beta }} (n)\right) ^{*}A(n)\left( {\widetilde{\chi }}(n)-{\widetilde{\beta }} (n)\right) <\infty , \end{aligned}$$

and for \(k=1,...,m\) and \(\textrm{Im}\lambda <0\), one obtains that

$$\begin{aligned} \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }_{k}(n)-{\widetilde{\beta }}_{k}(n)\right) ^{*}A(n)\left( {\widetilde{\chi }}_{k}(n)-{\widetilde{\beta }}_{k}(n)\right) <\infty . \end{aligned}$$
(28)

With the aid of (27) and (28), we may introduce the following.

Corollary 2.6

There exist at least m linearly independent solutions of (1) satisfying

$$\begin{aligned} \sum \limits _{n=0}^{\infty }{\widetilde{y}}^{*}(n)A(n){\widetilde{y}} (n)<\infty , \end{aligned}$$
(29)

for \(\textrm{Im}\lambda \ne 0.\)

3 Maximal Nullspace

In this section, we will show that the results shared in Sect. 2 can also be obtained with the aid of a nullspace which is maximal in \(\mathcal {D}\), and using these results, we will be able to introduce the Titchmarsh–Weyl matrix.

Let \(\mathbb {D}_{0}\) be an m-dimensional subspace of \(\mathcal {D}\) such that \([y,z](0)=0\) for all y and z belonging to this subspace. This is possible as the positive and negative inertia indices of the Hermitian form \([\cdot ,\cdot ]\) at \(n=0\) satisfy \(\textbf{i}_{+}=\textbf{i}_{-}=m\) and, hence, \(\min \left\{ \textbf{i}_{+},\textbf{i}_{-}\right\} =m.\) Note that \(\mathbb {D}_{0}\) is a maximal subspace of \(\mathcal {D}\) on which \([\cdot ,\cdot ](0)=0.\) For \(y\in D_{r}^{-}\cap \mathbb {D}_{0}\), we obtain from (16) that \(y=0\) for \(\textrm{Im}\lambda >0.\) Hence, one has

$$\begin{aligned} \mathcal {D}=D_{r}^{-}\oplus \mathbb {D}_{0},\text { }\textrm{Im}\lambda >0. \end{aligned}$$
(30)

Similarly, for \(y\in D_{r}^{+}\cap \mathbb {D}_{0}\) we obtain from (16) that \(y=0\) for \(\textrm{Im}\lambda <0\), and hence,

$$\begin{aligned} \mathcal {D}=D_{r}^{+}\oplus \mathbb {D}_{0},\text { }\textrm{Im}\lambda <0. \end{aligned}$$
(31)

Using the representations (30) and (31), we may introduce for \( \chi \in \mathcal {D}\) and \(\beta \in \mathbb {D}_{0}\) that

$$\begin{aligned}{}[\chi -\beta ,\chi -\beta ](0)+2\textrm{Im}\lambda \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }(n)-{\widetilde{\beta }} (n)\right) ^{*}A(n)\left( {\widetilde{\chi }}(n)-{\widetilde{\beta }} (n)\right) \le 0,\text { }\textrm{Im}\lambda >0, \end{aligned}$$
(32)

and

$$\begin{aligned}{}[\chi -\beta ,\chi -\beta ](0)+2\textrm{Im}\lambda \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }(n)-{\widetilde{\beta }} (n)\right) ^{*}A(n)\left( {\widetilde{\chi }}(n)-{\widetilde{\beta }} (n)\right) \ge 0,\text { }\textrm{Im}\lambda <0, \end{aligned}$$
(33)

together with the following results.

Theorem 3.1

Let \(\chi \in \mathcal {D}\), \(\beta \in \mathbb {D}_{0}\), and let \(\chi _{1},...,\chi _{m}\) be a completion of the basis \(\beta _{1},...,\beta _{m}\) of \( \mathbb {D}_{0}\) to a basis of \(\mathcal {D}.\) Then for \( \textrm{Im}\lambda \ne 0\), one gets that

$$\begin{aligned} \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }(n)-{\widetilde{\beta }} (n)\right) ^{*}A(n)\left( {\widetilde{\chi }}(n)-{\widetilde{\beta }} (n)\right) <\infty \end{aligned}$$

and for \(k=1,...,m\) and \(\textrm{Im}\lambda \ne 0\), one obtains that

$$\begin{aligned} \sum \limits _{n=0}^{\infty }\left( \widetilde{\chi }_{k}(n)-{\widetilde{\beta }}_{k}(n)\right) ^{*}A(n)\left( {\widetilde{\chi }}_{k}(n)-{\widetilde{\beta }}_{k}(n)\right) <\infty . \end{aligned}$$

4 Titchmarsh–Weyl Matrix

Let \(\theta _{1},...,\theta _{m},\varphi _{1},...,\varphi _{m}\) be linearly independent solutions of (1), and let us construct the following \(2m\times 2m\) matrix

$$\begin{aligned} U=\left[ \begin{array}{cc} \Theta&\Phi \end{array} \right] =\left[ \begin{array}{cc} \Theta _{1} &{} \Phi _{1} \\ \Theta _{2} &{} \Phi _{2} \end{array} \right] , \end{aligned}$$

satisfying \(U(0)=I_{2m},\) where \(\Theta _{1},\Theta _{2},\Phi _{1},\Phi _{2}\) are \(m\times m\) matrices, \(\Theta =\left[ \begin{array}{ccc} \theta _{1}&\cdots&\theta _{m} \end{array} \right] ,\) \(\Phi =\left[ \begin{array}{ccc} \varphi _{1}&\cdots&\varphi _{m} \end{array} \right] \) and \(I_{2m}\) is the identity matrix of dimension \(2m.\)

Note that \([\Phi ,\Phi ](0)=\mathbb {O}\), where \(\mathbb {O}\) is the \(m\times m\) zero matrix; indeed, \(U(0)=I_{2m}\) gives \(\Phi _{1}(0)=\mathbb {O}\) and \(\Phi _{2}(0)=I_{m},\) so that \(\Phi ^{*}(0)\left( J(0)/i\right) \Phi (0)=\mathbb {O}.\) Therefore, \(\varphi _{1},...,\varphi _{m}\in \mathbb {D}_{0}.\) Hence, Theorem 3.1 implies the following

$$\begin{aligned} \sum \limits _{n=0}^{\infty }{\widetilde{\Psi }}^{*}(n)A(n){\widetilde{\Psi }} (n)<\infty ,\quad \textrm{Im}\lambda \ne 0, \end{aligned}$$
(34)

where \(\Psi (n)=\Theta (n)-\Phi (n)H\) and H is an \(m\times m\) constant matrix given by

$$\begin{aligned} H=\left[ \begin{array}{ccc} c_{11} &{} \cdots &{} c_{m1} \\ \vdots &{} &{} \vdots \\ c_{1m} &{} \cdots &{} c_{mm} \end{array} \right] . \end{aligned}$$

Now we shall define the following

$$\begin{aligned} U^{*}(J/i)U=\varepsilon \left[ \begin{array}{cc} \varvec{A} &{} \varvec{B}^{*} \\ \varvec{B} &{} \varvec{C} \end{array} \right] , \end{aligned}$$

where \(\varepsilon =1\) when \(\textrm{Im}\lambda >0\) and \(\varepsilon =-1\) when \(\textrm{Im}\lambda <0.\) On the other hand, one gets that

$$\begin{aligned} U^{*}(J/i)U=\left[ \begin{array}{cc} i\left( \Theta _{1}^{*}E^{*}\Theta _{2}-\Theta _{2}^{*}E\Theta _{1}\right) &{} i\left( \Theta _{1}^{*}E^{*}\Phi _{2}-\Theta _{2}^{*}E\Phi _{1}\right) \\ i\left( \Phi _{1}^{*}E^{*}\Theta _{2}-\Phi _{2}^{*}E\Theta _{1}\right) &{} i\left( \Phi _{1}^{*}E^{*}\Phi _{2}-\Phi _{2}^{*}E\Phi _{1}\right) \end{array} \right] . \end{aligned}$$

Hence, we may introduce the following.

Theorem 4.1

For sufficiently large \(r,\) \(\varvec{C}(r)>0.\)

Proof

A direct calculation shows that

$$\begin{aligned} \varvec{C}(r)=\varepsilon \Phi ^{*}(r)(J(r)/i)\Phi (r)=\left\{ \begin{array}{c} 2\textrm{Im}\lambda \sum \limits _{n=0}^{r-1}{\widetilde{\Phi }}^{*}(n)A(n) {\widetilde{\Phi }}(n),\quad \textrm{Im}\lambda >0, \\ -2\textrm{Im}\lambda \sum \limits _{n=0}^{r-1}{\widetilde{\Phi }}^{*}(n)A(n) {\widetilde{\Phi }}(n),\quad \textrm{Im}\lambda <0, \end{array} \right. \end{aligned}$$

which is positive definite for all sufficiently large r by the definiteness condition, and this completes the proof. \(\square \)

Corollary 4.2

As r increases, \(\varvec{C}(r)\) is nondecreasing.
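Indeed (a one-line verification which we record for convenience), the proof of Theorem 4.1 gives \(\varvec{C}(r)=2\left| \textrm{Im}\lambda \right| \sum \nolimits _{n=0}^{r-1}{\widetilde{\Phi }}^{*}(n)A(n){\widetilde{\Phi }}(n),\) so that for \(r<r^{\prime }\)

$$\begin{aligned} \varvec{C}(r^{\prime })-\varvec{C}(r)=2\left| \textrm{Im}\lambda \right| \sum \limits _{n=r}^{r^{\prime }-1}{\widetilde{\Phi }}^{*}(n)A(n){\widetilde{\Phi }}(n)\ge 0, \end{aligned}$$

since \(A(n)\ge 0\) on \(\mathbb {N}.\)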

Using Corollary 2.6 and (34), we may introduce the following.

Theorem 4.3

Suppose that Eq. (1) has s linearly independent solutions, \(m\le s\le 2m,\) satisfying (29), and let \(\mu _{1}(r)\le ...\le \mu _{m}(r)\) be the eigenvalues of \(\varvec{C}(r).\) Then \(\mu _{1}(r),...,\mu _{s-m}(r)\) remain finite and the others tend to infinity as \(r\rightarrow \infty .\)

Proof

Let \(Y(n)=\Phi (n)e_{r},\) where \(e_{r}\) is a unit eigenvector of \( \varvec{C}(r)\) corresponding to the eigenvalue \(\mu (r).\) Then

$$\begin{aligned} 2\textrm{Im}\lambda \sum \limits _{n=0}^{r-1}{\widetilde{Y}}^{*}(n)A(n) {\widetilde{Y}}(n)=e_{r}^{*}\Phi ^{*}(r)(J(r)/i)\Phi (r)e_{r}=\left\{ \begin{array}{c} \mu (r),\quad \textrm{Im}\lambda >0, \\ -\mu (r),\quad \textrm{Im}\lambda <0, \end{array} \right. \end{aligned}$$

where \(\mu (r)<\infty ,\) and

$$\begin{aligned} \sum \limits _{n=0}^{r-1}{\widetilde{Y}}^{*}(n)A(n){\widetilde{Y}} (n)\le \frac{\mu (r)}{\left| 2\textrm{Im}\lambda \right| }<\infty . \end{aligned}$$

Now, if \(\mu (r)\) remains bounded as \(r\rightarrow \infty ,\) we may choose a convergent subsequence of \(\left\{ e_{r}\right\} \) with limit e and construct a solution \(Y(n)=\Phi (n)e\) of (1) such that

$$\begin{aligned} \sum \limits _{n=0}^{\infty }{\widetilde{Y}}^{*}(n)A(n){\widetilde{Y}} (n)<\infty . \end{aligned}$$

However, from (34) we know that m linearly independent summable-square solutions already come from \(\Psi (n)=\Theta (n)-\Phi (n)H;\) hence at most \(s-m\) of the eigenvalues \(\mu _{k}(r)\) can remain bounded as \(r\rightarrow \infty ,\) and this completes the proof. \(\square \)

5 Conclusion and Remarks

In this paper, we have introduced a lower bound for the number of summable-square solutions of (1) using Pleijel's idea [26, 27] on nested ellipsoids corresponding to the Hermitian forms. It seems that the form (1) is new in the literature and contains (8) and (9). Indeed, if J(n) is chosen as a constant matrix satisfying \(J^{*}=-J,\) then using (3) we get that \(M(n)=L^{*}(n)\), and hence, by (2) \(B^{*}(n)=B(n)\) for each \(n\in \mathbb {N}.\)

In Sect. 2, we have shown that nested ellipsoids for the discrete Hamiltonian system (1) can be constructed without symmetric boundary conditions, and we have proved that at least m linearly independent solutions of (1) are summable-square on \( \mathbb {N}.\) We have also shown that the results can be obtained if one considers a nullspace at the regular end point. The second construction allowed us to construct the Titchmarsh–Weyl matrix of (1).

Using the Titchmarsh–Weyl matrix, we may introduce some additional results. Indeed, let us consider the matrix \(\Psi (n)=\Theta (n)-\Phi (n)H\) defined in Sect. 4. Using (32) and (33), we may introduce the following inequalities

$$\begin{aligned} 2\textrm{Im}\lambda \sum \limits _{n=0}^{\infty }\widetilde{\Psi }^{*}(n)A(n){\widetilde{\Psi }}(n)\le iE^{*}(0)H-iH^{*}E(0),\quad \textrm{ Im}\lambda >0, \end{aligned}$$

and

$$\begin{aligned} 2\textrm{Im}\lambda \sum \limits _{n=0}^{\infty }\widetilde{\Psi }^{*}(n)A(n){\widetilde{\Psi }}(n)\ge iE^{*}(0)H-iH^{*}E(0),\quad \textrm{Im}\lambda <0. \end{aligned}$$
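These inequalities follow from the matrix analogues of (32) and (33) upon computing the boundary term at \(n=0\) (we record the short computation for convenience): since \(U(0)=I_{2m},\) we have \(\Psi (0)=\left[ \begin{array}{c} I_{m} \\ -H \end{array} \right] \) and

$$\begin{aligned}{}[\Psi ,\Psi ](0)=\Psi ^{*}(0)\left( J(0)/i\right) \Psi (0)=\frac{1}{i}\left( E^{*}(0)H-H^{*}E(0)\right) , \end{aligned}$$

so that \([\Psi ,\Psi ](0)+2\textrm{Im}\lambda \sum \nolimits _{n=0}^{\infty }{\widetilde{\Psi }}^{*}(n)A(n){\widetilde{\Psi }}(n)\le 0\) for \(\textrm{Im}\lambda >0\) yields the first inequality, and the reversed inequality for \(\textrm{Im}\lambda <0\) yields the second.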

For the continuous Hamiltonian system (6), Hinton and Shaw [17] characterized the case

$$\begin{aligned} \lim \limits _{x\rightarrow \infty }Z^{*}(x,{\overline{\lambda }} )JY(x,\lambda )=0, \end{aligned}$$

where Y is a solution of (6) and Z is a solution of

$$\begin{aligned} JZ^{\prime }=\left( {\overline{\lambda }}A+B\right) Z, \end{aligned}$$

as the limit-point case for (6) at the singular point infinity (also see [34]). Following this definition, we shall define the limit-point case for (1) at infinity with the aid of the limit of the Hermitian form as

$$\begin{aligned}{}[y,z](\infty ):=\lim _{n\rightarrow \infty }z^{*}(n)\left( J(n)/i\right) y(n)=0, \end{aligned}$$

for solutions of (1) for which the Hermitian form \([y,z]\) has finite values at every \(n\in \mathbb {N}.\) Indeed, this is possible if one considers the subset \(\mathcal {D}[ \mathbb {N} ]\) of \(\mathcal {D}\) consisting of all solutions that are summable-square on \( \mathbb {N}.\) Then we may introduce the following.

Theorem 5.1

Assume that J(n) is bounded and \( A(n)\ge \tau I_{2m}\) on \( \mathbb {N},\) where \(\tau >0.\) Then \([y,z](\infty )=0\) for \(y,z\in \mathcal {D}[ \mathbb {N} ].\)

Proof

Since J(n) is bounded on \( \mathbb {N}\), the Cauchy–Schwarz inequality gives

$$\begin{aligned} \begin{array}{l} \sum \limits _{n=1}^{\infty }\left| \left[ \begin{array}{cc} z_{1}^{*}(n) &{} z_{2}^{*}(n) \end{array} \right] \left[ \begin{array}{cc} 0 &{} -E^{*}(n) \\ E(n) &{} 0 \end{array} \right] \left[ \begin{array}{c} y_{1}(n) \\ y_{2}(n) \end{array} \right] \right| \\ \le const.\sum \limits _{k=1}^{2}\left\{ \left( \sum \limits _{n=1}^{\infty }\left| z_{k}(n)\right| ^{2}\right) ^{1/2}\left( \sum \limits _{n=1}^{\infty }\left| y_{3-k}(n)\right| ^{2}\right) ^{1/2}\right\} . \end{array} \end{aligned}$$
(35)

Now suppose that

$$\begin{aligned}{}[y,z](\infty )\ne 0. \end{aligned}$$
(36)

Because of the assumption \(A(n)\ge \tau I_{2m},\) we have

$$\begin{aligned} \sum \limits _{n=0}^{\infty }{\widetilde{y}}^{*}(n)A(n){\widetilde{y}}(n)\ge \tau \sum \limits _{n=0}^{\infty }\left( \left| y_{1}(n+1)\right| ^{2}+\left| y_{2}(n)\right| ^{2}\right) . \end{aligned}$$
(37)

Since \(y,z\in \mathcal {D}[ \mathbb {N} ],\) (37) (and the analogous estimate for z) implies that the right-hand side of (35) is finite, and hence the series on the left-hand side of (35) converges. In particular, its general term tends to zero, that is, \([y,z](\infty )=0,\) which contradicts (36). This completes the proof. \(\square \)

We shall note that Theorem 5.1 is the discrete version of Krall’s result [23] on Eq. (6).