1 Introduction

Recently, I have been engaged in computer assisted existence proofs of periodic solutions for nonlinear delay differential equations. For this purpose, I have used verified numerical computations. In this process, using the spectral method, I have calculated Galerkin's approximations of periodic solutions for various delay differential equations, and then linearized the original nonlinear equations around the calculated Galerkin's approximations. An interesting and surprising computer experimental observation is that the minimum singular values of the coefficient matrices of such linearized equations are unchanged even if the dimensions of Galerkin's equations are increased, provided that the orders of the Galerkin approximations are sufficiently large.

In order to understand this phenomenon, in Sect. 2 we present three classes of real square matrices, say G's, modeling coefficient matrices of linearized Galerkin's equations for certain first order nonlinear delay differential equations with smooth nonlinearity. We show computer experimental observations stating that the minimum singular values of these classes of matrices are unchanged even if the orders of the matrices are increased. Taking the time variant Hutchinson equation as an example, it is also pointed out that this property holds for a wide class of coefficient matrices of linearized Galerkin's equations for nonlinear delay differential equations with smooth nonlinearity.

In Sect. 3, Theorem 1 is presented based on the Schur complement. This theorem gives a lower bound of the minimum singular value of a certain class of \(2 \times 2\) block matrices. For calculating the minimum singular value of an \(n \times n\) complex matrix G, it is a sharpened version of the asymptotic diagonal dominant matrix theory presented by Oishi and Sekine [1], because in [1], for the estimation of \(\Vert G^{-1}\Vert _2\), which is the reciprocal of the minimum singular value of G, the overestimation \(\Vert G^{-1}\Vert _2 \leqq \sqrt{\Vert G^{-1}\Vert _{\infty }\Vert G^{-1}\Vert _1}\) is used. Here, n is a positive integer, and \(\Vert G\Vert _2, \Vert G\Vert _{\infty }\) and \(\Vert G\Vert _1\) are the matrix norms induced by the Euclidean norm, \(\infty \)-norm and 1-norm in \(\mathbb {C}^n\), respectively.

In Sect. 4, it is shown that tight lower bounds of the minimum singular values of the matrices G's presented as examples in Sect. 2 can be derived using Theorem 1 stated in Sect. 3. These lower bounds are unchanged even if the orders of the matrices are increased. Section 5 gives conclusions.

2 Class of matrices modeling coefficient matrices of linearized Galerkin’s equations

Let p and q be positive integers. Let \(M_{p,q}(\mathbb {C})\) and \(M_{p,q}(\mathbb {R})\) be the sets of all \(p \times q\) complex and real matrices, respectively. Let \(G = (G_{ij}) \in M_{p,q}(\mathbb {C})\), and let \(G^* = (\overline{G}_{ji}) \in M_{q,p}(\mathbb {C})\) be its Hermitian conjugate. Here, \(G_{ij}\) is the (i, j) element of G, and \(\overline{G}_{ji} \) is the complex conjugate of \(G_{ji}\). The singular values of G are the square roots of the eigenvalues of \(GG^*\) or \(G^*G\). Denote the smallest and the largest singular values of G by \(\sigma _{\text {min}}(G)\) and \(\sigma _{\text {max}}(G)\), respectively. We call \(\sigma _{\text {min}}(G)\) the minimum singular value of G. When there is no confusion, we simply write \(\sigma _{\text {min}}\) and \(\sigma _{\text {max}}\). The matrix norm \(\Vert G\Vert _2\) is given by \(\Vert G\Vert _2=\sigma _{\text {max}}(G)\).

Let n be a positive integer. We simply denote \(M_{n,n}(\mathbb {C})\) and \(M_{n,n}(\mathbb {R})\) as \(M_n(\mathbb {C})\) and \(M_n(\mathbb {R})\), respectively. If \(G \in M_n(\mathbb {C})\) is invertible, \(\Vert G^{-1}\Vert _2=1/\sigma _{\text {min}}(G)\).
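These relations can be illustrated by a small Matlab check (a sketch without verification; the matrix below is an arbitrary example):

```matlab
% Illustration: norm(G,2) equals sigma_max(G), and, for invertible G,
% norm(inv(G),2) equals 1/sigma_min(G).
G = [1 3 0; 2 2 3; 0 2 3];          % arbitrary invertible example
s = svd(G);                         % singular values, sorted descending
disp([norm(G, 2), s(1)]);           % both print sigma_max
disp([norm(inv(G), 2), 1/s(end)]);  % both print 1/sigma_min
```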

2.1 Example 1

First, let us consider the following matrix \(G\in M_n(\mathbb {R})\), with n a positive integer:

$$\begin{aligned} G= \left( \begin{array}{ccccccccc} 1 &{} 3 &{} 0 &{} 0 &{} 0 &{} \cdots &{} 0 &{} 0 &{} 0\\ 2&{} 2 &{} 3 &{} 0 &{} 0 &{} \cdots &{} 0 &{} 0 &{}0\\ 0 &{} 2 &{} 3 &{} 3 &{} 0 &{} \cdots &{} 0 &{} 0 &{}0\\ {} &{}\vdots &{}&{}&{}&{}\ddots \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots &{} 2&{}n-1 &{} 3 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots &{} 0&{}2&{} n \\ \end{array} \right) . \end{aligned}$$
(1)

Remark 1

Here, I would like to explain why I consider this type of matrix. I have been engaged in computer assisted existence proofs of periodic solutions for nonlinear delay differential equations of the following form:

$$\begin{aligned} \frac{dx(t)}{dt}=f(x(t), x(t-\tau )). \end{aligned}$$
(2)

Here, \(\tau \) is a positive real constant expressing a delay, and f is a sufficiently smooth function of x(t) and \(x(t-\tau )\). In order to prove the existence of periodic solutions, I first calculate Galerkin's approximations of periodic solutions. Then, in the process of proving the existence of exact solutions, we linearize the original nonlinear equations around the calculated Galerkin's approximations. If we seek \(2\pi \)-periodic solutions, in the process of calculating Galerkin's approximations, x(t) is assumed to have the form

$$\begin{aligned} x(t) = a_0 + \sum _{k=1}^n (a_k \cos {k t} + b_k\sin {kt}). \end{aligned}$$
(3)

Then,

$$\begin{aligned} \frac{dx(t)}{dt} = \sum _{k=1}^n (-k a_k \sin {k t} + kb_k\cos {kt}). \end{aligned}$$
(4)

The (\(2 \times 2\) block) diagonal elements of the coefficient matrices of linearized Galerkin's equations correspond to the coefficients of \(a_k\cos {kt}\) and \(b_k\sin {kt}\). Thus, to treat linearized Galerkin's equations in a realistic way, we need to consider block matrices. However, to make things easy and to reveal the essence, I would like to consider the simplified matrix G defined by Eq. (1). In proving the existence of periodic solutions using verified numerical computations, we sometimes have to take n large, say more than \(10^7\); see such an example in Ref. [1]. In this paper, we consider such a situation.

The diagonal part of G being \((1,2, \ldots , n-1, n)\) reflects the fact that we consider a first order differential equation. The non-zero off-diagonal elements of G come from the existence of nonlinear terms and delay terms. The sparsity of G comes from the polynomial nonlinearity. \(\square \)

Numerical experiments show that the minimum singular value of G is almost unchanged even if we increase n; its reciprocal is given as

$$\begin{aligned} \frac{1}{\sigma _{\text {min}}} \approx 3.12221 \end{aligned}$$
(5)

provided that \(n \geqq 100\).
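This observation can be reproduced with a few lines of Matlab (a sketch of the experiment, without verification; the tested values of n are arbitrary choices):

```matlab
% Example 1: G has diagonal (1,...,n), subdiagonal 2, superdiagonal 3.
% The reciprocal of the minimum singular value stabilizes near 3.12221.
for n = [50 100 200 400]
    G = diag(1:n) + diag(3*ones(n-1,1), 1) + diag(2*ones(n-1,1), -1);
    fprintf('n = %4d : 1/sigma_min = %.5f\n', n, 1/min(svd(G)));
end
```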

Remark 2

If we can prove that the minimum singular value of G is unchanged even if we increase the order of the matrix G, it is quite useful for the computer assisted existence proof of solutions for nonlinear differential equations. This is the motivation of the present paper. \(\square \)

2.2 Example 2

Let \(n >5\) be a positive integer. Next, let us consider \(G \in M_n(\mathbb {R})\) defined by

$$\begin{aligned} G= \left( \begin{array}{cccccccccccccc} 1 &{} 3 &{} 2 &{} 6 &{} 7 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots &{} 0 &{} 0 &{} 0\\ 5 &{} 4 &{} 7 &{} 9 &{} 9 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} \cdots &{} 0 &{} 0 &{} 0\\ 5 &{} 8 &{} 4 &{} 7 &{} 10 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} \cdots &{} 0 &{} 0 &{} 0\\ 1 &{} 2 &{} 10 &{} 5 &{} 7 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} \cdots &{} 0 &{} 0 &{} 0\\ 9 &{} 3 &{} 10 &{} 6 &{} 1 &{} 3 &{} 0 &{} 0 &{} 0 &{} 0&{} \cdots &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 2 &{} 6 &{} 3 &{} 0 &{} 0 &{} 0&{} \cdots &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 2 &{} 7 &{} 3 &{} 0 &{} 0 &{} \cdots &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 2 &{} 8 &{} 3 &{} 0&{} \cdots &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 2 &{} 9 &{} 3&{} \cdots &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 &{} 2 &{} 10&{} \cdots &{} 0 &{} 0 &{} 0\\ {} &{}\vdots &{}&{}&{}&{}&{}&{}&{}&{}&{}&{}\ddots &{}&{}\vdots \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots &{} 0 &{} 0 &{} 0 &{}0 &{}0&{} 2&{}n-1 &{} 3 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots &{} 0 &{}0 &{} 0 &{} 0 &{} 0 &{}0&{}2&{} n \\ \end{array} \right) . \end{aligned}$$
(6)

The leading \(5 \times 5\) sub-matrix of G is a random matrix, and the remaining part of G is the same as in Example 1.

Remark 3

This type of matrix is a generalized model for the case of autonomous nonlinear delay differential equations. In this case, one of the independent variables is fixed and a phase condition is added in order to remove the translation invariance of solutions with respect to the time variable. We refer, for instance, to Shinohara [2]. In such a case, of course, no random numbers appear. Here, generalizing this situation, we want to consider the case in which a part of the matrix G presented in Example 1 receives a random perturbation. Of course, this perturbed matrix has a different minimum singular value if the perturbation is different. Here, taking the matrix G given by Eq. (6) as an example, we would like to show that the minimum singular value of the perturbed matrix is almost unchanged if we increase the order of the matrix. \(\square \)

Numerical experiments show that the minimum singular value of G is almost unchanged even if we increase n; its reciprocal is given as

$$\begin{aligned} \frac{1}{\sigma _{\text {min}}} \approx 1.53479 \end{aligned}$$
(7)

provided that \(n \geqq 100\). This bound is specific to this matrix, i.e., if we change the random matrix part, then the minimum singular value of G changes.
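The experiment can be repeated as for Example 1, overwriting the leading \(5 \times 5\) block (again a sketch without verification):

```matlab
% Example 2: Example 1's matrix with its leading 5x5 block replaced by
% the fixed block of Eq. (6); 1/sigma_min stabilizes near 1.53479.
A5 = [1 3 2 6 7; 5 4 7 9 9; 5 8 4 7 10; 1 2 10 5 7; 9 3 10 6 1];
for n = [50 100 200 400]
    G = diag(1:n) + diag(3*ones(n-1,1), 1) + diag(2*ones(n-1,1), -1);
    G(1:5, 1:5) = A5;            % overwrite the leading block
    fprintf('n = %4d : 1/sigma_min = %.5f\n', n, 1/min(svd(G)));
end
```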

2.3 Example 3

Let \(n>10\) be a positive integer. As the third example, let us consider \(G \in M_n(\mathbb {R})\) given by

$$\begin{aligned} G= \left( \begin{array}{lllllllllcc} 1 &{} k &{} k^2&{} k^3 &{} k^4 &{} k^5 &{} k^6 &{} k^7 &{} &{}\cdots \\ k &{} 2 &{} k &{} k^2 &{} k^3 &{} k^4 &{} k^5 &{} k^6 &{}&{} \cdots \\ k^2&{} k&{} 3 &{} { k} &{} k^2 &{} k^3 &{} k^4 &{} k^5 &{}&{}\cdots \\ k^3 &{} k^2&{} k&{} 4 &{} k &{} k^2 &{} k^3 &{} k^4 &{}&{} \cdots \\ k^4 &{}k^3 &{} k^2&{} k &{} 5 &{} k&{} k^2 &{} k^3&{}&{}\cdots \\ k^5&{} k^4 &{}k^3 &{} k^2&{} k &{} 6 &{} k&{} k^2 &{}&{}\cdots \\ k^6 &{} k^5&{} k^4 &{}k^3 &{} k^2&{} k &{} 7 &{} k&{}&{}\cdots \\ k^7&{} k^6 &{} k^5&{} k^4 &{}k^3 &{} k^2&{} k &{} 8&{}&{}\cdots \\ {} &{}&{}\vdots &{}&{}&{}&{}&{}&{}\ddots &{}\\ \ {} &{}&{}\dots &{}&{}&{}&{}&{}&{}&{} n-1 &{}~k\\ &{}&{}\dots &{}&{}&{}&{}&{}&{}&{}k&{} ~n\\ \end{array}\right) \end{aligned}$$
(8)

with \(0.5<k<1\).

Remark 4

This type of off-diagonal element comes from smooth non-polynomial nonlinear terms. \(\square \)

In this case, numerical experiments show that \(1/\sigma _{\text {min}}\) is almost unchanged even if we increase n, provided \(n\geqq 100\), as shown in Table 1.

Table 1 Numerically calculated inverse of the minimum singular values (without verification)
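The experiment behind Table 1 can be sketched as follows (without verification; n = 200 is an arbitrary choice above the threshold \(n \geqq 100\)):

```matlab
% Example 3: diagonal (1,...,n), off-diagonal entries k^|i-j|.
n = 200;
for k = [0.6 0.7 0.8 0.9]
    G = k.^abs((1:n)' - (1:n));   % symmetric part k^|i-j|
    G(1:n+1:end) = 1:n;           % set the diagonal to 1,...,n
    fprintf('k = %.1f : 1/sigma_min = %.5f\n', k, 1/min(svd(G)));
end
```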

2.4 Example 4

The above mentioned examples are simplified models. Let us now return to nonlinear delay differential equations. As an example, consider the periodically time variant Hutchinson equation defined by

$$\begin{aligned} \frac{dx(t)}{dt}-\alpha \left( 1+\beta \sin {\omega t}\right) x(t)\left( 1-\frac{x(t-\tau )}{K}\right) =0 . \end{aligned}$$
(9)

We refer to Section 10.1 of Ruan's survey article [3] for this equation. For \(\tau =3, \alpha =0.5,\beta =0.6,\omega =1,K=2\), it has a 1/2-subharmonic solution, i.e., a \(4\pi \)-periodic solution, shown in Fig. 1.

Fig. 1 Subharmonic solutions (\(\tau =3, \alpha =0.5,\beta =0.6,\omega =1,K=2\))
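A solution of this kind can be simulated with Matlab's dde23 (a minimal sketch; the constant history \(x \equiv 1.5\) and the integration span are assumptions for illustration, not the settings used for Fig. 1):

```matlab
% Hutchinson equation (9): dx/dt = alpha*(1+beta*sin(omega*t))*x*(1-x(t-tau)/K).
tau = 3; alpha = 0.5; beta = 0.6; omega = 1; K = 2;
rhs = @(t, x, xlag) alpha*(1 + beta*sin(omega*t))*x*(1 - xlag/K);
sol = dde23(rhs, tau, 1.5, [0 200]);   % constant history x = 1.5
plot(sol.x, sol.y); xlabel('t'); ylabel('x(t)');
```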

For this solution, we have calculated Galerkin's approximations by the spectral method. At these approximate solutions, we have calculated the minimum singular values \(\sigma _{\text {min}}\) of the coefficient matrices of the linearized Galerkin equations. As shown in Table 2, numerical experiments indicate that a tight upper bound \(1/\sigma _{\text {min}} \leqq 13\) is valid for \(50 \leqq m \leqq 500\). Here, m is the order of the Fourier expansion used in Galerkin's approximation, so that \(2m+1\) is the dimension of Galerkin's equation.

Table 2 Calculated inverse of the minimum singular value \(\sigma _{\text {min}}\) of the coefficient matrix of the linearized Galerkin equation (without verification)

Using verified numerical computations based on Theorem 1, which will be presented in Sect. 3, we can show that there exists an exact 1/2-subharmonic solution near the approximate solutions. However, to discuss this, we need to derive Galerkin's equation precisely, which requires lengthy calculations. Thus, we will not discuss this example in the rest of this paper; it will be described in a separate paper.

Remark 5

This kind of “invariance of the minimum singular values” of coefficient matrices of linearized Galerkin's equations can be seen experimentally for a wide class of nonlinear delay differential equations with smooth nonlinearity. For example, another instance has been presented in the analysis of the forced El Niño equation in Ref. [1].

3 Theorem for generalized asymptotic diagonal dominant matrix

To derive lower bounds of the minimum singular values of the matrices G's presented as examples in Sect. 2, we show the following theorem:

Theorem 1

Let n be a positive integer, and m be a non-negative integer satisfying \(m \leqq n\). Let \(G \in M_n(\mathbb {R})\) be defined by

$$\begin{aligned} G=\left( \begin{array}{cc}A &{} B \\ C &{} D \\ \end{array} \right) \end{aligned}$$
(10)

with \(A \in M_m(\mathbb {R})\), \(B \in M_{m,n-m}(\mathbb {R})\), \(C \in M_{n-m,m}(\mathbb {R})\), and \(D \in M_{n-m,n-m}(\mathbb {R})\). Let \(D_d\) and \(D_{f}\) be the diagonal part and the off-diagonal part of D, respectively. Assume that A and \(D_d\) are invertible. If

$$\begin{aligned} \Vert A^{-1}B\Vert _2<1, ~\Vert CA^{-1}\Vert _2<1, ~\text {and}~\Vert D_d^{-1}(D_f-CA^{-1}B)\Vert _2<1, \end{aligned}$$
(11)

G is invertible and the following estimate holds:

$$\begin{aligned} \Vert G^{-1}\Vert _2 \leqq \frac{\max \left\{ {\Vert A^{-1}\Vert _2,\displaystyle {\frac{\Vert D_d^{-1}\Vert _2}{1-\Vert D_d^{-1} (D_f-CA^{-1}B)\Vert _2}}}\right\} }{(1-\Vert A^{-1}B\Vert _2)(1-\Vert CA^{-1}\Vert _2)}. \end{aligned}$$
(12)

Proof

From the invertibility of A, we can define the Schur complement with respect to A as

$$\begin{aligned} S_A=D-CA^{-1}B. \end{aligned}$$

From \(\Vert D_d^{-1}(D_f-CA^{-1}B)\Vert _2<1\), it follows that \(S_A\) is invertible. In fact, from the invertibility of \(D_d\), we can define \(D_d^{-1}(S_A-D_d)\). From

$$\begin{aligned} \Vert D_d^{-1}(S_A-D_d)\Vert _2=\Vert D_d^{-1}(D_f-CA^{-1}B)\Vert _2<1, \end{aligned}$$

we can use Banach’s contraction mapping principle to prove \(S_A\) is invertible and

$$\begin{aligned} \Vert S_A^{-1}\Vert _2 \leqq \displaystyle {\frac{\Vert D_d^{-1}\Vert _2}{1-\Vert D_d^{-1}(D_f-CA^{-1}B)\Vert _2}}. \end{aligned}$$
(13)

Then, G is known to be invertible and \(G^{-1}\) is given byFootnote 1

$$\begin{aligned} G^{-1}=\left( \begin{array}{cc}I_m &{} -A^{-1}B \\ O_{n-m,m} &{} I_{n-m} \\ \end{array} \right) \left( \begin{array}{cc}A^{-1} &{} O_{m,n-m} \\ O_{n-m,m} &{} S_A^{-1} \\ \end{array} \right) \left( \begin{array}{cc}I_m&{} O_{m,n-m} \\ -CA^{-1} &{} I_{n-m} \\ \end{array} \right) . \end{aligned}$$
(14)

Here, \(I_m\) is the m-th order identity matrix and \(O_{k,l}\) is the \(k \times l\) zero matrix. Thus, we have

$$\begin{aligned} \Vert G^{-1}\Vert _2\leqq \left\| \left( \begin{array}{cc}I_m &{} -A^{-1}B \\ O_{n-m,m} &{} I_{n-m} \\ \end{array} \right) \right\| _2 \left\| \left( \begin{array}{cc}A^{-1} &{} O_{m,n-m} \\ O_{n-m,m} &{} S_A^{-1} \\ \end{array} \right) \right\| _2 \left\| \left( \begin{array}{cc}I_m&{} O_{m,n-m} \\ -CA^{-1} &{} I_{n-m} \\ \end{array} \right) \right\| _2. \end{aligned}$$

It is easy to see, noting that \(\Vert I+N\Vert _2 \leqq 1+\Vert N\Vert _2 \leqq 1/(1-\Vert N\Vert _2)\) holds whenever \(\Vert N\Vert _2<1\), that

$$\begin{aligned} \left\| \left( \begin{array}{cc}I_m &{} -A^{-1}B \\ O_{n-m,m} &{} I_{n-m} \\ \end{array} \right) \right\| _2 \leqq \frac{\Vert I_n\Vert _2}{1-\left\| \left( \begin{array}{cc}O_m &{} -A^{-1}B \\ O_{n-m,m} &{} O_{n-m} \\ \end{array} \right) \right\| _2}= \frac{1}{1-\Vert A^{-1}B\Vert _2}. \end{aligned}$$

Similarly, we have

$$\begin{aligned} \left\| \left( \begin{array}{cc}I_m&{} O_{m,n-m} \\ -CA^{-1} &{} I_{n-m} \\ \end{array} \right) \right\| _2 \leqq \frac{1}{1-\Vert CA^{-1}\Vert _2}, \end{aligned}$$

and

$$\begin{aligned} \left\| \left( \begin{array}{cc}A^{-1} &{} O_{m,n-m} \\ O_{n-m,m} &{} S_A^{-1} \\ \end{array} \right) \right\| _2 \leqq \max {\left\{ {\Vert A^{-1}\Vert _2,\displaystyle {\frac{\Vert D_d^{-1} \Vert _2}{1-\Vert D_d^{-1}(D_f-CA^{-1}B)\Vert _2}}}\right\} }. \end{aligned}$$

This completes the proof. \(\square \)

If \(m=0\), the first two conditions in Eq. (11) are vacuous, and the third condition becomes

$$\begin{aligned} \Vert G_d^{-1}G_f\Vert _2 <1. \end{aligned}$$
(15)

Here, \(G_d\) and \(G_f\) are the diagonal and the off-diagonal parts of G, respectively. The condition given by Eq. (15) expresses, in some sense, the diagonal dominance of G. For diagonally dominant matrix theory, we refer to Refs. [4] and [5]. Thus, for \(m\geqq 1\), I would like to call the conditions given by Eq. (11) the generalized asymptotic diagonal dominance of G.
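As a floating-point illustration of Theorem 1 (a sketch only; a rigorous version would evaluate every norm with verified numerical computations, and the function name schur_bound is ours), the following Matlab function checks the conditions (11) and evaluates the right-hand side of Eq. (12):

```matlab
% Evaluate the bound (12) for G split at size m, and return it together
% with the exact value norm(inv(G),2) for comparison (not verified).
function [bound, exact] = schur_bound(G, m)
    n  = size(G, 1);
    A  = G(1:m, 1:m);      B = G(1:m, m+1:n);
    C  = G(m+1:n, 1:m);    D = G(m+1:n, m+1:n);
    Dd = diag(diag(D));    Df = D - Dd;
    a = norm(A\B, 2);                    % ||A^{-1}B||_2
    b = norm(C/A, 2);                    % ||CA^{-1}||_2
    c = norm(Dd\(Df - C*(A\B)), 2);      % ||Dd^{-1}(Df - CA^{-1}B)||_2
    assert(a < 1 && b < 1 && c < 1, 'conditions (11) are violated');
    bound = max(norm(inv(A), 2), norm(inv(Dd), 2)/(1 - c)) / ((1 - a)*(1 - b));
    exact = norm(inv(G), 2);
end
```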

Remark 6

It is obvious that Theorem 1 holds for any matrix norm induced by a vector norm. Thus, it also holds if \(\Vert \cdot \Vert _2 \) is replaced by \(\Vert \cdot \Vert _{\infty }\) or \(\Vert \cdot \Vert _1\).

Remark 7

Theorem 1 states that, for a given \(m\geqq 1\), if

$$\begin{aligned} \max \left\{ {\Vert A^{-1}\Vert _2,\displaystyle {\frac{\Vert D_d^{-1}\Vert _2}{1-\Vert D_d^{-1}(D_f-CA^{-1}B)\Vert _2}}}\right\} =\Vert A^{-1}\Vert _2, \end{aligned}$$
(16)

and if \(\Vert A^{-1}B\Vert _2\) and \(\Vert CA^{-1}\Vert _2\) are sufficiently small, then the bound of \(\Vert G^{-1}\Vert _2\) given by Eq. (12) becomes almost the same as \(\Vert A^{-1}\Vert _2\).

Furthermore, for a given \(m\geqq 1\), if Eq. (16) holds for any n greater than or equal to m, and if upper bounds for \(\Vert A^{-1}B\Vert _2\) and \(\Vert CA^{-1}\Vert _2\) can be derived independently of n, then the bound of \(\Vert G^{-1}\Vert _2\) given by Eq. (12) holds for any n greater than or equal to m.

In the next section, we will show that both facts are true for the matrices G’s presented as Examples in Sect. 2. \(\square \)

4 Lower bounds of minimum singular values for matrices presented as examples in Sect. 2

In this section, it will be shown that lower bounds of the minimum singular values of the matrices G's given in Sect. 2 can be derived using Theorem 1 presented in Sect. 3. The emphasis is that these lower bounds are reasonably tight if we take m = 100, and are unchanged even if the orders of the matrices G's are increased.

4.1 Example 1

Let \(n \geqq 10\) be a positive integer. Let us consider the matrix \(G \in M_n(\mathbb {R})\) defined by Eq. (1). Let m be a non-negative integer satisfying \(m \leqq n\). Let us divide \(G \in M_n(\mathbb {R})\) as

$$\begin{aligned} G=\left( \begin{array}{cc}A &{} B \\ C &{} D \\ \end{array} \right) \end{aligned}$$
(17)

with \(A \in M_m(\mathbb {R})\), \(B \in M_{m,n-m}(\mathbb {R})\), \(C \in M_{n-m,m}(\mathbb {R})\), and \(D \in M_{n-m,n-m}(\mathbb {R})\). If we take \(m=9\), then

$$\begin{aligned}{} & {} A=\left( \begin{array}{ccccccccc} 1 &{} 3&{} 0 &{} 0 &{} 0&{} 0&{} 0 &{} 0&{} 0\\ 2 &{} 2 &{} 3 &{} 0 &{} 0&{} 0&{} 0 &{} 0&{} 0\\ 0 &{} 2 &{} 3 &{} 3 &{} 0&{} 0&{} 0 &{} 0&{} 0\\ 0 &{} 0 &{} 2 &{} 4 &{} 3 &{} 0&{} 0&{} 0 &{} 0\\ 0 &{} 0 &{} 0&{} 2 &{} 5&{} 3 &{} 0 &{}0 &{}0\\ 0 &{} 0&{} 0&{} 0 &{} 2 &{} 6 &{} 3 &{} 0 &{} 0\\ 0 &{} 0&{} 0&{} 0 &{} 0&{} 2 &{} 7 &{} 3 &{} 0 \\ 0 &{} 0&{} 0&{} 0 &{} 0&{} 0&{} 2 &{} 8 &{} 3 \\ 0 &{} 0&{} 0&{} 0 &{} 0&{} 0&{} 0 &{} 2 &{} 9 \\ \end{array} \right) ,~~~\text {and}~~~ B=\left( \begin{array}{cccc} 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ 3 &{} 0 &{} \cdots &{} 0\\ \end{array} \right) .\\ \end{aligned}$$

Then, using verified numerical computations, we have

$$\begin{aligned} \Vert A^{-1}\Vert _{2} \leqq 3.122. \end{aligned}$$

Remark 8

Nowadays, if m is small, say less than several thousand, using verified numerical computations we can easily calculate a tight and rigorous upper bound of \(\Vert A^{-1}\Vert _{2} \) on a laptop computer, provided that A is not too ill-conditioned. However, if n is large, say more than \(10^{7}\), it is usually difficult to compute \(\Vert G^{-1}\Vert _{2}\) with guaranteed accuracy even on supercomputers. The typical situation we suppose is that m is small and n is large. In computer assisted existence proofs of solutions for nonlinear differential equations, such a situation is not rare; see, for instance, Ref. [1]. \(\square \)

If v is the 9th column vector of the matrix \(A^{-1}\), we have

$$\begin{aligned} A^{-1}B = (3v,0_9,0_9,\ldots ,0_9). \end{aligned}$$

Here, \(0_9\) is the 9-dimensional zero column vector. Using verified numerical computations, we have

$$\begin{aligned} \Vert A^{-1}B\Vert _2 \leqq 0.458. \end{aligned}$$

Noticing that

$$\begin{aligned} C=\left( \begin{array}{ccccccccc} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 2 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 &{} 0 &{} 0\\ {} &{}&{}&{}&{}\vdots \\ \end{array} \right) , \end{aligned}$$

if u is the 9th row vector of \(A^{-1}\), we have

$$\begin{aligned} CA^{-1}=\left( \begin{array}{ccccccccc} &{}2u\\ 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ {} &{}\vdots \\ 0 &{} 0 &{} \cdots &{} 0\\ \end{array} \right) . \end{aligned}$$

Thus, using verified numerical computations, it is seen that

$$\begin{aligned} \Vert CA^{-1}\Vert _{2} \leqq 0.26. \end{aligned}$$

From \(D_d=\)diag\((10,11,\ldots , n)\), we have \(\Vert D_d^{-1}\Vert _{2}=\Vert D_d^{-1}\Vert _{\infty }=\Vert \)diag\((10^{-1},11^{-1},\ldots ,n^{-1})\Vert _{\infty }=0.1\), since the 2-norm and the \(\infty \)-norm coincide for diagonal matrices.

Let c be the (9, 9) component of \(A^{-1}\); then we have

$$\begin{aligned} CA^{-1}B=\left( \begin{array}{ccccccccc} 6c &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ {} &{}\vdots \\ 0 &{} 0 &{} \cdots &{} 0\\ \end{array} \right) \end{aligned}$$

with \(6c=0.738\ldots \). Then, we have

$$\begin{aligned} \Vert D_f-CA^{-1}B\Vert _2 \leqq \Vert D_f\Vert _2 + \Vert CA^{-1}B\Vert _2 \leqq 5+0.739 \leqq 6, \end{aligned}$$

which implies that \(\Vert D_d^{-1}\Vert _2\Vert D_f-CA^{-1}B\Vert _{2}\leqq 0.1\cdot 6= 0.6\). Thus, the conditions of Theorem 1 are satisfied.Footnote 2 In this example,

$$\begin{aligned} \max \left\{ {\Vert A^{-1}\Vert _2,\displaystyle {\frac{\Vert D_d^{-1}\Vert _2}{1-\Vert D_d^{-1}\Vert _2\Vert D_f-CA^{-1}B\Vert _2}}}\right\} =\Vert A^{-1}\Vert _2 \end{aligned}$$

is satisfied. Theorem 1 implies

$$\begin{aligned} \Vert G^{-1}\Vert _{2}\leqq & {} \frac{\max \left\{ {\Vert A^{-1}\Vert _2,\displaystyle {\frac{\Vert D_d^{-1}\Vert _2}{1-\Vert D_d^{-1}(D_f-CA^{-1}B)\Vert _2}}}\right\} }{(1-\Vert A^{-1}B\Vert _2)(1-\Vert CA^{-1}\Vert _2)}\\\leqq & {} \frac{3.122}{(1-0.458)(1-0.26)} \\ {}\leqq & {} 7.79. \end{aligned}$$
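Applying the sketch function schur_bound from Sect. 3 to this example reproduces these constants (floating-point values only, not verified):

```matlab
% Example 1 with m = 9: the bound is about 7.8, while the exact value
% norm(inv(G),2) is about 3.12, independently of n.
n = 1000;
G = diag(1:n) + diag(3*ones(n-1,1), 1) + diag(2*ones(n-1,1), -1);
[bound, exact] = schur_bound(G, 9);
fprintf('bound = %.3f, exact = %.5f\n', bound, exact);
```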

If we further increase m, we obtain tighter verified bounds, as shown in Fig. 2.

Fig. 2 Dependency on m of the present upper bounds for \(\Vert G^{-1}\Vert _2\)

These upper bounds of \(\Vert G^{-1}\Vert _2\) are mathematically rigorous and valid for any positive integer n greater than m.

For m = 100, we have a verified bound \(\Vert G^{-1}\Vert _2 \leqq 3.285\), which holds for any \(n > 100\). We think this is reasonably tight compared with the numerical experimental value given by Eq. (5) in Sect. 2.1, \(1/\sigma _{\text {min}} =3.12221\ldots \).

4.2 Example 2

Let \(n \geqq 10\) be a positive integer. Next, let us consider the matrix \(G \in M_n(\mathbb {R})\) defined by Eq. (6). We divide G as Eq. (17) with \(m=9\). Then, we have

$$\begin{aligned}{} & {} A=\left( \begin{array}{ccccccccc} 1 &{} 3 &{} 2 &{} 6 &{} 7 &{} 0 &{} 0 &{} 0 &{} 0 \\ 5 &{} 4 &{} 7 &{} 9 &{} 9 &{} 0 &{} 0 &{} 0 &{} 0 \\ 5 &{} 8 &{} 4 &{} 7 &{} 10 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} 2 &{} 10 &{} 5 &{} 7 &{} 0 &{} 0 &{} 0 &{} 0 \\ 9 &{} 3 &{} 10 &{} 6 &{} 1 &{} 3 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0&{} 0&{} 0 &{} 2 &{} 6 &{} 3 &{} 0 &{} 0\\ 0 &{} 0&{} 0&{} 0 &{} 0&{} 2 &{} 7 &{} 3 &{} 0 \\ 0 &{} 0&{} 0&{} 0 &{} 0&{} 0&{} 2 &{} 8 &{} 3 \\ 0 &{} 0&{} 0&{} 0 &{} 0&{} 0&{} 0 &{} 2 &{} 9 \\ \end{array} \right) ,~\text {and}~ B=\left( \begin{array}{cccc} 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} \cdots \\ 3 &{} 0 &{} \cdots \\ \end{array} \right) .\\ \end{aligned}$$

Using verified numerical computations, it is seen that

$$\begin{aligned} \Vert A^{-1}\Vert _{2} \leqq 1.54. \end{aligned}$$

If v is the 9th column vector of the matrix \(A^{-1}\), we have

$$\begin{aligned} A^{-1}B = (3v,0_9,0_9,\ldots ,0_9). \end{aligned}$$

Here, \(0_9\) is the 9-dimensional zero column vector. Using verified numerical computations, we have

$$\begin{aligned} \Vert A^{-1}B\Vert _2 \leqq 0.42. \end{aligned}$$

Noticing that

$$\begin{aligned} C=\left( \begin{array}{ccccccccc} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 2 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 &{} 0 &{} 0\\ {} &{}&{}&{}&{}\vdots \\ \end{array} \right) , \end{aligned}$$

if u is the 9th row vector of \(A^{-1}\), we have

$$\begin{aligned} CA^{-1}=\left( \begin{array}{ccccccccc} &{}2u\\ 0 &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ {} &{}\vdots \\ 0 &{} 0 &{} \cdots &{} 0\\ \end{array} \right) . \end{aligned}$$

Thus, using verified numerical computations, it is seen that

$$\begin{aligned} \Vert CA^{-1}\Vert _{2} \leqq 0.26. \end{aligned}$$

From \(D_d=\)diag\((10,11,\ldots , n)\), we have, as in Example 1, \(\Vert D_d^{-1}\Vert _{2}=\Vert D_d^{-1}\Vert _{\infty }=\Vert \)diag\((10^{-1},11^{-1},\ldots ,n^{-1})\Vert _{\infty }=0.1\).

Let c be the (9, 9) component of \(A^{-1}\); then we have

$$\begin{aligned} CA^{-1}B=\left( \begin{array}{ccccccccc} 6c &{} 0 &{} \cdots &{} 0\\ 0 &{} 0 &{} \cdots &{} 0\\ {} &{}\vdots \\ 0 &{} 0 &{} \cdots &{} 0\\ \end{array} \right) \end{aligned}$$

with \(6c=0.736\cdots \). Then, we have

$$\begin{aligned} \Vert D_f-CA^{-1}B\Vert _2 \leqq 6, \end{aligned}$$

which implies that \(\Vert D_d^{-1}\Vert _2\Vert D_f-CA^{-1}B\Vert _{2}\leqq 0.1\cdot 6= 0.6\). Thus, the conditions of Theorem 1 are satisfied. As in Example 1, we have

$$\begin{aligned} \max \left\{ {\Vert A^{-1}\Vert _2,\displaystyle {\frac{\Vert D_d^{-1}\Vert _2}{1-\Vert D_d^{-1}(D_f-CA^{-1}B)\Vert _2}}}\right\} =\Vert A^{-1}\Vert _2. \end{aligned}$$

Theorem 1 implies

$$\begin{aligned} \Vert G^{-1}\Vert _{2}\leqq & {} \frac{\max \left\{ {\Vert A^{-1} \Vert _2,\displaystyle {\frac{\Vert D_d^{-1}\Vert _2}{1-\Vert D_d^{-1} (D_f-CA^{-1}B)\Vert _2}}}\right\} }{(1-\Vert A^{-1}B\Vert _2)(1-\Vert CA^{-1}\Vert _2)}\\\leqq & {} \frac{1.54}{(1-0.42)(1-0.26)} \\\leqq & {} 3.59. \end{aligned}$$

If we further increase m, we obtain tighter verified bounds, as shown in Fig. 3.

Fig. 3 Dependency on m of the present upper bounds for \(\Vert G^{-1}\Vert _2\)

These upper bounds of \(\Vert G^{-1}\Vert _2\) are mathematically rigorous and valid for any positive integer n greater than m.

For m = 100, we have a verified bound \(\Vert G^{-1}\Vert _2 \leqq 1.615\), which holds for any \(n > 100\). We think this is reasonably tight compared with the numerical experimental value given by Eq. (7) in Sect. 2.2, \(1/\sigma _{\text {min}} =1.53479\ldots \).

4.3 Example 3

Let \(n \geqq 21\) be a positive integer. Let \(G \in M_n(\mathbb {R})\) be defined by Eq. (8) with \(0.5< k <1\).

If we divide G as Eq. (17) with \(m=20\), we have

$$\begin{aligned} A= \left( \begin{array}{lllllllllcc} 1 &{} k &{} k^2&{} k^3 &{} \cdots \\ k &{} 2 &{} k &{} k^2 &{} \cdots \\ k^2&{} k&{} 3 &{} { k} &{} \cdots \\ {} &{}&{}\vdots &{}\ddots &{}\\ \ {} &{}&{}\dots &{}&{} 19 &{}~k\\ {} &{}&{}\dots &{}&{}k&{} ~20\\ \end{array}\right) , ~\text {and}~ B=\left( \begin{array}{cccccc} k^{20} &{} k^{21} &{} k^{22}\cdots \\ k^{19} &{} k^{20} &{} k^{21}\cdots \\ k^{18} &{} k^{19} &{} k^{20}\cdots \\ \vdots &{}\\ k^2 &{} k^3 &{} k^4\cdots \\ k &{} k^2 &{} k^3\cdots \\ \end{array} \right) . \end{aligned}$$
(18)

Let \(v=(k^{20},k^{19},\cdots ,k^2,k)^t\). We assume \(n-m=pq\) with positive integers p and q. Assume also that p is small, say \(p \leqq 1000\). Put

$$\begin{aligned} B_p=(v,kv,k^2v,\ldots ,k^{p-1}v). \end{aligned}$$

Then, from

$$\begin{aligned} B= & {} [B_p,k^pB_p,\ldots ,k^{p(q-1)}B_p]\\= & {} [B_p,O_{m,p},\ldots ,O_{m,p}]+[O_{m,p},k^pB_p,O_{m,p},\ldots ,O_{m,p}]+\cdots \\ {}{} & {} + [O_{m,p},\ldots ,O_{m,p},k^{p(q-1)}B_p], \end{aligned}$$

it follows that

$$\begin{aligned} \Vert A^{-1}B\Vert _2 \leqq \frac{\Vert A^{-1}B_p\Vert _2}{1-k^p}. \end{aligned}$$

Since p is small, we can estimate a tight upper bound of \(\Vert A^{-1}B_p\Vert _2\) rigorously with verified numerical computations. Thus, using this formula and verified numerical computations, we obtain the results shown in Table 3.

Table 3 Verified upper bounds of \(\Vert A^{-1}\Vert _{2}\) and \(\Vert A^{-1}B\Vert _{2}\). The bound for \(\Vert A^{-1}B\Vert _{2}\) is valid for any \(n \geqq 20\)
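This estimate of \(\Vert A^{-1}B\Vert _2\) can be sketched in Matlab as follows (floating-point only; the values of p and k below are arbitrary choices, and the paper's version evaluates \(\Vert A^{-1}B_p\Vert _2\) with verified computations):

```matlab
% Bound ||A^{-1}B||_2 <= ||A^{-1}B_p||_2 / (1 - k^p) for Example 3, m = 20.
m = 20; p = 100; k = 0.8;
A = k.^abs((1:m)' - (1:m));  A(1:m+1:end) = 1:m;   % block A of Eq. (18)
v  = k.^(m:-1:1)';                  % v = (k^20, ..., k)^t
Bp = v .* k.^(0:p-1);               % B_p = (v, k v, ..., k^{p-1} v)
bound_AinvB = norm(A\Bp, 2) / (1 - k^p)   % valid for every n with n - m = p*q
```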

From the symmetry of G, we have \(A^t=A\) and \(C=B^t\), so that the value of \(\Vert CA^{-1}\Vert _{2}\) coincides with \(\Vert A^{-1}B\Vert _{2}\). From \(D_d=\)diag\((21,22,\ldots , n)\), it follows that \(\Vert D_d^{-1}\Vert _{2}=\Vert D_d^{-1}\Vert _{\infty }=\Vert \)diag\((21^{-1},22^{-1},\ldots ,n^{-1})\Vert _{\infty }\leqq 0.048\). Here, we note that

$$\begin{aligned} \Vert D_f-B^tA^{-1}B\Vert _2 \leqq \Vert D_f\Vert _2 +\frac{\Vert B_p^tA^{-1}B_p\Vert _2}{(1-k^p)^2}. \end{aligned}$$
(19)

Then, from \(\Vert D_d^{-1}\Vert _{\infty }\leqq 0.048\) and the bound given by Eq. (19), using verified numerical computations, we have estimates of \(\Vert D_d^{-1}\Vert _2\Vert D_f-CA^{-1}B\Vert _2\) as shown in Table 4.

Table 4 Verified upper bounds of \(\Vert D_d^{-1}\Vert _2\Vert D_f-CA^{-1}B\Vert _2\). These bounds are valid for any \(n \geqq 20\)

Thus, it is seen that the conditions of Theorem 1 are satisfied for \(k=0.6, k=0.7, k=0.8,\) and \(k=0.9\). As in Examples 1 and 2, we have

$$\begin{aligned} \max \left\{ {\Vert A^{-1}\Vert _2,\displaystyle {\frac{\Vert D_d^{-1}\Vert _2}{1-\Vert D_d^{-1} (D_f-CA^{-1}B)\Vert _2}}}\right\} =\Vert A^{-1}\Vert _2. \end{aligned}$$

From Theorem 1, we have verified upper bounds for \(\Vert G^{-1}\Vert _{2}\) as shown in Table 5.

Table 5 Upper bounds of \(\Vert G^{-1}\Vert _{2}\). These upper bounds are mathematically rigorous and valid for any positive integer \(n> 20\)

If we increase m, we obtain tighter bounds, as shown in Fig. 4.

Fig. 4 Dependency on m of the present upper bounds for \(\Vert G^{-1}\Vert _2\)

Here, in Fig. 4, the lines from bottom to top correspond to \(k=0.6, k=0.7, k=0.8,\) and \(k=0.9\), respectively. These upper bounds of \(\Vert G^{-1}\Vert _2\) are mathematically rigorous and valid for any positive integer n greater than m.

For m = 100, we have verified upper bounds for \(\Vert G^{-1}\Vert _2\) as shown in Table 6.

Table 6 Upper bounds of \(\Vert G^{-1}\Vert _{2}\). These upper bounds are mathematically rigorous and valid for any positive integer \(n> 100\)

We think these bounds are reasonably tight compared with the numerical experimental values shown in Table 1 in Sect. 2.3.

5 Conclusions

Before presenting conclusions, we note that in this paper we have used Matlab 2022a for the calculations without verification. For the verified numerical computations, we have used the VCP library developed by Kouta Sekine.Footnote 3 To handle various data types smoothly in verified computations, Masahide Kashiwagi has developed a C++ class library named the kv library.Footnote 4 This library is written on the philosophy of policy-based programming. The VCP library is built on the kv library so as to enjoy high performance computing technology based on optimized BLAS such as MKL. Thus, using VCP, one can write a high performance verification program with a policy-based programming philosophy. For details, we refer to the cited home pages.

Let \(A \in M_n(\mathbb {R})\). For example, for the calculation of a verified bound of the minimum singular value of A, we have used the following type of algorithm.

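A minimal Matlab sketch of such an algorithm is shown below; it assumes the standard approach of certifying a lower bound \(s<\sigma _{\text {min}}(A)\) through positive definiteness of \(A'A-s^2 I\). The function name is ours, the actual VCP functions differ, and a rigorous proof requires performing the Cholesky test in interval arithmetic (as the kv library provides) rather than in the floating-point arithmetic used here:

```matlab
% Sketch: certify sigma_min(A) > s by testing A'*A - s^2*I for positive
% definiteness (rigorous only if the test is done in interval arithmetic).
function s = sigma_min_lower_bound(A)
    s = 0.99 * min(svd(A));          % approximate value, shrunk by a margin
    [~, flag] = chol(A'*A - s^2*eye(size(A,1)));
    if flag ~= 0                     % flag == 0 means the factorization succeeded
        error('positive definiteness test failed; decrease s and retry');
    end
end
```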

Here, \(A'\) denotes the transpose of A in Matlab notation. The verification functions can be obtained from Sekine's home page.Footnote 5

Now, the conclusions are in order. In this paper, we have presented three classes of matrices modeling coefficient matrices of linearized Galerkin's equations. These Galerkin's equations come from certain first order nonlinear delay differential equations with smooth nonlinearity. We have shown results of computer experiments. The results exhibit the fact that the minimum singular values of such classes of matrices are unchanged even if the orders of the matrices are increased.

In Sect. 3, to derive lower bounds of the minimum singular values of the matrices G's presented as examples in Sects. 2.1, 2.2, and 2.3, we have presented Theorem 1, which is derived based on the Schur complement. We have proposed the concept of a generalized asymptotic diagonally dominant matrix. Then, it is shown that lower bounds of the minimum singular values of the G's can be derived using Theorem 1. These lower bounds become reasonably tight if we take m = 100. The emphasis is that they are unchanged even if the orders of the matrices G's are increased. This corresponds to the experimental observation that the minimum singular values of the matrices G's presented in Sect. 2 are unchanged even if the orders of the matrices are increased.

Wide classes of coefficient matrices, say G's, of linearized Galerkin's equations for nonlinear differential equations with smooth nonlinearity have this generalized asymptotic diagonal dominance. We think this is the reason why the minimum singular values of the G's are unchanged even if the orders of the approximations are increased.