
1 Introduction

The quantum Rabi model (QRM) is one of the basic models in quantum optics, describing the interaction between a two-level atom and a light field. Its Hamiltonian \(H_{\text {Rabi}}\) is given by

$$ H_{\text {Rabi}} = \omega a^{\dagger } a + g (a + a^{\dagger }) \sigma _x + \Delta \sigma _z, $$

where \(a^{\dag }\) and \(a\) are the creation and annihilation operators of the quantum harmonic oscillator, \(\sigma _x,\sigma _z \) are the Pauli matrices

$$ \sigma _x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad \sigma _z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, $$

\(\omega >0\) is the frequency of the light field (modeled by a quantum harmonic oscillator), \(2 \Delta >0\) is the energy difference of the two-level system and \(g>0\) is the strength of the interaction between the two systems. In our discussion we set \(\hbar =1\) without loss of generality. The QRM has a \(\mathbb {Z}/2 \mathbb {Z}\)-symmetry that allows a decomposition \(H_{\text {Rabi}} = H_{+\Delta } \oplus H_{-\Delta }\) for Hamiltonians \(H_{\pm \Delta }\) acting on appropriate subspaces of the Hilbert space on which \(H_{\text {Rabi}}\) acts. Degeneracies then appear naturally between one eigenvalue of \(H_{+\Delta }\) and one eigenvalue of \(H_{-\Delta }\). The parameters \((g,\Delta ,\omega )\) of the QRM are classified into parameter regimes according to the static and dynamic properties of the resulting energy levels and their solutions (see Xie et al. 2017 for a discussion of parameter regimes).

Recent developments in experimental physics (Maissen et al. 2014; Yoshihara et al. 2017) have realized parameter regimes (including the non-perturbative ultrastrong coupling and the deep strong coupling regimes) where approximate models, such as the Jaynes–Cummings model, can no longer describe the physical properties of the QRM. These developments, along with the prospect of applications to areas such as quantum information technologies (see Haroche and Raimond 2008; Yoshihara et al. 2017), have made the study of the properties of the QRM and its spectrum an important topic in physics. At the same time, there has been growing interest in the mathematical aspects of the QRM and its generalizations (see, for example, Reyes-Bustos and Wakayama 2017; Sugiyama 2018; Wakayama 2017).

The asymmetric quantum Rabi model (AQRM) is one of these generalizations. The Hamiltonian of the AQRM is obtained by introducing a nontrivial interaction term that breaks the \(\mathbb {Z}/2 \mathbb {Z}\)-symmetry in the Hamiltonian of the QRM. Concretely, its Hamiltonian is given by

$$\begin{aligned} H_{\text {Rabi}}^{\varepsilon } = \omega a^\dag a+\Delta \sigma _z +g\sigma _x(a^\dag +a) + \varepsilon \sigma _x, \end{aligned}$$

with \(\varepsilon \in \mathbb {R}\). In general, this model loses the \(\mathbb {Z}/2 \mathbb {Z}\)-symmetry of the QRM, making the presence of degeneracies a nontrivial question; in particular, there appears to be no way to define invariant subspaces (called parity subspaces in the case of the QRM) whose solutions constitute degeneracies (or crossings).

However, contrary to this intuition, degenerate states were discovered in numerical experiments for the case \(\varepsilon = \frac{1}{2} \) by Li and Batchelor (2015). Later, Wakayama (2017) proved their existence for the case \(\varepsilon =\frac{1}{2} \) and conjectured the existence of degenerate states for general half-integer \(\varepsilon \) in terms of the divisibility of constraint polynomials. The conjecture was recently proved affirmatively in the general case by Kimoto, Wakayama and the author (Kimoto et al. 2017). The presence of degenerate solutions for half-integer parameter hints at the possibility of a hidden symmetry in the AQRM, as discussed in Semple and Kollar (2017) and Wakayama (2017).

In order to describe how the degeneracies in the spectrum of the AQRM appear, we introduce the constraint polynomials.

Definition 1

Let \(N\in \mathbb {Z}_{\ge 0}\). The polynomials \(P^{(N,\varepsilon )}_{k}(x,y)\) of degree \(k \in \mathbb {Z}_{\ge 0}\) are defined recursively by

$$\begin{aligned} P^{(N,\varepsilon )}_{0}(x,y)&= 1, \qquad P^{(N,\varepsilon )}_{1}(x,y) = x + y - 1 - 2 \varepsilon , \\ P^{(N,\varepsilon )}_{k}(x,y)&= (k x + y - k(k + 2 \varepsilon ) ) P_{k-1}^{(N,\varepsilon )}(x,y) - k(k-1)(N-k+1) x P_{k-2}^{(N,\varepsilon )}(x,y) \quad (k \ge 2). \end{aligned}$$

The polynomial \(P^{(N,\varepsilon )}_{N}(x,y) \) is called the constraint polynomial; its defining property is that if the parameters \(g,\Delta >0 \) satisfy the constraint equation

$$ P^{(N,\varepsilon )}_{N}((2g)^2,\Delta ^2) = 0, $$

then \(\lambda = N + \varepsilon - g^2 \) is an eigenvalue of \(H_{\text {Rabi}}^{\varepsilon }\). Any eigenvalue of the AQRM arising from the zeros of the constraint polynomials in this way is called a Juddian eigenvalue.
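The recurrence of Definition 1 is straightforward to transcribe; the following is a small symbolic sketch (ours, not part of the paper) that computes \(P^{(N,\varepsilon )}_{k}(x,y)\) exactly:

```python
# Sketch (not from the paper): the recurrence of Definition 1 in sympy.
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')

def P(N, k, e=eps):
    """P^{(N,e)}_k(x, y) via the three-term recurrence of Definition 1."""
    if k == 0:
        return sp.Integer(1)
    prev, cur = sp.Integer(1), x + y - 1 - 2*e      # P_0, P_1
    for j in range(2, k + 1):
        prev, cur = cur, sp.expand(
            (j*x + y - j*(j + 2*e))*cur - j*(j - 1)*(N - j + 1)*x*prev)
    return sp.expand(cur)

# P^{(N,eps)}_N((2g)^2, Delta^2) = 0 is the constraint equation for the
# Juddian eigenvalue lambda = N + eps - g^2.
print(P(2, 2))
```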

The original conjecture proposed in Wakayama (2017) is summarized in the following theorem.

Theorem 2

(Kimoto et al. 2017) For \(N, \ell \in \mathbb {Z}_{\ge 0}\), we have

$$\begin{aligned} P^{(N+\ell ,-\frac{\ell }{2})}_{N+\ell }(x,y) = A^{(\ell )}_N(x,y) P^{(N,\frac{\ell }{2})}_{N}(x,y), \end{aligned}$$
(1)

for a polynomial \(A^{(\ell )}_N(x,y) \in \mathbb {Z}[x,y]\). In addition, for \( \ell ,N \in \mathbb {Z}_{\ge 0}\) the polynomial \(A^{(\ell )}_N(x,y)\) has no zeros for \(x,y>0\).

In other words, since the constraint polynomials on both sides of (1) correspond to the same eigenvalue, we see that any Juddian eigenvalue of the AQRM is degenerate when the parameter \(\varepsilon \) is a half-integer. The proof of Theorem 2 proceeds by studying certain determinant expressions satisfied by the constraint polynomials.
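The divisibility (1) can be checked symbolically for small values (a sketch of ours, not the proof); for \(N = \ell = 1\) the quotient is \(A^{(1)}_1(x,y) = 2x + y\), which indeed has no zeros for \(x,y>0\):

```python
# Sketch: symbolic check of the divisibility (1) of Theorem 2 for N = l = 1.
import sympy as sp

x, y = sp.symbols('x y')

def P(N, k, e):
    """P^{(N,e)}_k(x, y) via the recurrence of Definition 1."""
    if k == 0:
        return sp.Integer(1)
    prev, cur = sp.Integer(1), x + y - 1 - 2*e
    for j in range(2, k + 1):
        prev, cur = cur, sp.expand(
            (j*x + y - j*(j + 2*e))*cur - j*(j - 1)*(N - j + 1)*x*prev)
    return sp.expand(cur)

N, l = 1, 1
big = P(N + l, N + l, -sp.Rational(l, 2))    # P^{(N+l, -l/2)}_{N+l}
small = P(N, N, sp.Rational(l, 2))           # P^{(N, l/2)}_N
quotient, remainder = sp.div(big, small, x, y)
print(quotient, remainder)   # 2*x + y 0
```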

In the same paper Wakayama (2017) (see also Reyes-Bustos and Wakayama 2017), a second conjecture was presented. This time the polynomials involved are not the constraint polynomials, but the intermediate polynomials \(P^{(N,\varepsilon )}_{k}(x,y)\). Since these polynomials are also related to solutions of the eigenvalue problem of the QRM, the study of this conjecture may provide some new insight into the relation between solutions of the QRM.

Conjecture 3

(Wakayama 2017) Let \(N, \ell , k \in \mathbb {Z}_{\ge 0}\). There are polynomials \(A_{k}^{(N,\ell )}(x,y)\) and \(B_k^{(N,\ell )}(x,y)\) in \( \mathbb {Z}[x,y]\) such that

$$ P^{(N+\ell ,-\frac{\ell }{2})}_{k+\ell }(x,y) = A_{k}^{(N,\ell )}(x,y) P^{(N,\frac{\ell }{2})}_{k}(x,y) + B_k^{(N,\ell )}(x,y) $$

with \(B_N^{(N,\ell )}(x,y) = B_0^{(N,\ell )}=0\). Furthermore, we have \(A_{k}^{(N,\ell )}(x,y)>0\) for \(x,y>0\).

It is important to notice that, as described in Wakayama (2017), the conjecture does not have a unique solution. We discuss this issue in Sect. 3 and, by extending the divisibility properties of the constraint polynomials, we give a candidate solution to the conjecture above. In addition, we describe the relation of the constraint polynomials with the coefficients of the solutions of the eigenvalue problem of the AQRM in the Bargmann space picture.

Finally, we remark that there have been recent efforts to define parameter regimes of the QRM using information from the energy levels of the solutions and not just the dynamic properties (see Rossatto et al. 2017). This approach is based on knowledge of the parameters for which exceptional solutions appear (for instance, the zeros of constraint polynomials). We expect that the results given here for constraint polynomials may provide further insight for studies in this direction.

2 The Confluent Picture of the Asymmetric Quantum Rabi Model

In this section we introduce the asymmetric quantum Rabi model (AQRM) and the realization of its eigenvalue problem in the Bargmann space, equivalent to a system of linear confluent Heun differential equations. We then show that the coefficients of the solutions of the AQRM are expressed in terms of the constraint polynomials and other related polynomials. A good reference for Bargmann space methods is Schweber (1967).

The Bargmann space \(\mathcal {H}_\mathcal {B}\) is the space of complex functions \(f : \mathbb {C}\rightarrow \mathbb {C}\) holomorphic everywhere in the complex plane satisfying

$$ \Vert f \Vert _{\mathcal {B}} = \left( \frac{1}{\pi } \int _\mathbb {C}|f(z)|^2e^{-|z|^2} dx dy \right) ^{1/2} < \infty $$

for \(z = x + i y\), where \(dx \, dy \) is the Lebesgue measure on \( \mathbb {C}\simeq \mathbb {R}^2\).
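For instance (a numerical sketch of ours, not part of the paper), the monomial \(z^n\) satisfies \(\Vert z^n \Vert _{\mathcal {B}}^2 = n!\): in polar coordinates the defining integral reduces to \(2\int _0^\infty r^{2n+1}e^{-r^2}\,dr = n!\), which can be checked by quadrature:

```python
# Sketch: ||z^n||_B^2 = n!, checked with a midpoint rule in the radial variable.
import math

def bargmann_norm_sq(n, r_max=12.0, steps=100000):
    """Approximate (1/pi) * integral of |z^n|^2 * exp(-|z|^2) over the plane."""
    h = r_max / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * h                    # midpoint of the ith subinterval
        total += r**(2*n + 1) * math.exp(-r*r)
    return 2.0 * h * total

print(bargmann_norm_sq(3))   # close to 3! = 6
```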

An important property of the Bargmann space is that it contains the entire functions \(f\) having an asymptotic expansion of the form

$$\begin{aligned} f(z) = e^{\alpha _1 z} z^{-\alpha _0}(c_0 + c_1 z^{-1} + c_2 z^{-2} + \cdots ), \end{aligned}$$
(2)

as \(z \rightarrow \infty \) (see Braak 2011b). In particular, normal solutions of differential equations having an unramified singular point of rank \(2\) at infinity are included.

The Bargmann space \(\mathcal {H}_\mathcal {B}\) is seen to be a Hilbert space unitarily equivalent to \(L^2(\mathbb {R})\) and the realization of the creation and annihilation operators is given by

$$ a \rightarrow \partial _{z}, \qquad a^{\dag } \rightarrow z, $$

where we use \(\partial _{z} \) to denote \(\frac{\partial }{\partial z} \).

Recall that the Hamiltonian \(H_{\text {Rabi}}^{\varepsilon }\) of the AQRM is given by

$$\begin{aligned} H_{\text {Rabi}}^{\varepsilon } = \omega a^\dag a+\Delta \sigma _z +g\sigma _x(a^\dag +a) + \varepsilon \sigma _x. \end{aligned}$$
(3)

Without loss of generality, we set \(\omega = 1\) for the remainder of the paper. When \(H_{\text {Rabi}}^{\varepsilon }\) is realized as an operator acting on \(\mathcal {H}_\mathcal {B}\otimes \mathbb {C}^2\), it is given by

$$ \tilde{H}_{\text {Rabi}}^{\varepsilon } := \begin{bmatrix} z \partial _z + \Delta & g(z + \partial _z) + \varepsilon \\ g(z+\partial _z) + \varepsilon & z \partial _z - \Delta \end{bmatrix}. $$

Then, the time-independent Schrödinger equation \(H_{\text {Rabi}}^{\varepsilon }\varphi =\lambda \varphi \, (\lambda \in \mathbb {R})\) is equivalent to the system of first-order differential equations

$$\begin{aligned} \tilde{H}_{\text {Rabi}}^{\varepsilon }\psi =\lambda \psi , \quad \psi = \begin{bmatrix} \psi _{1}(z) \\ \psi _{2}(z) \end{bmatrix}, \end{aligned}$$

where eigenfunctions of \(H_{\text {Rabi}}^{\varepsilon }\) associated with a given eigenvalue \(\lambda \in \mathbb {R}\) correspond to solutions \(\psi _i \in \mathcal {H}_\mathcal {B}\), \(i =1,2 \).

The eigenvalue problem of the AQRM is then reduced to finding entire functions \(\psi _1,\psi _2 \in \mathcal {H}_\mathcal {B}\) and a real number \( \lambda \) satisfying

$$\begin{aligned} \left\{ \begin{aligned} (z \partial _z + \Delta ) \psi _1 + (g (z + \partial _z) + \varepsilon ) \psi _2&= \lambda \psi _1, \\ (g (z + \partial _z) + \varepsilon ) \psi _1 + (z \partial _z - \Delta ) \psi _2&= \lambda \psi _2. \end{aligned} \right. \end{aligned}$$

Now, by setting \(\phi _\pm = \psi _1 \pm \psi _2 \), we get

$$\begin{aligned} \left\{ \begin{aligned} (z + g) \frac{d}{d z} \phi _+ + (g z + \varepsilon - \lambda ) \phi _+ + \Delta \phi _-&= 0, \\ (z - g) \frac{d}{d z} \phi _- - (g z + \varepsilon + \lambda ) \phi _- + \Delta \phi _+&= 0. \end{aligned} \right. \end{aligned}$$
(4)

We note that the system (4) is equivalent to a second-order confluent Heun differential equation with an (unramified) irregular singular point at \( z= \infty \) in addition to regular singular points at \(z = \pm g \) (cf. Braak 2016). Therefore, by the discussion above and (2), any entire solution \(\psi \) of (4) actually satisfies \(\psi \in \mathcal {H}_\mathcal {B}\otimes \mathbb {C}^2\). This is a key property used to prove the integrability in Braak (2011a).

Notice also that by applying the substitution \( z \rightarrow -z \), we obtain the alternative system

$$\begin{aligned} \left\{ \begin{aligned} (z + g) \frac{d}{d z} \bar{\phi }_- + (g z + \varepsilon - \lambda ) \bar{\phi }_- + \Delta \bar{\phi }_+&= 0, \\ (z - g) \frac{d}{d z} \bar{\phi }_+ - (g z + \varepsilon + \lambda ) \bar{\phi }_+ + \Delta \bar{\phi }_{-}&= 0 \end{aligned} \right. \end{aligned}$$
(5)

where \( \bar{\phi }_{\pm }(z) = \phi _\pm (-z) \). Furthermore, the two systems are equivalent under the transformation \(\varepsilon \rightarrow - \varepsilon \).

Setting \(x = \lambda + g^2 \), the solutions around the singularity \(z = -g \) (for \(x \pm \varepsilon \notin \mathbb {Z}\)) are given by

$$\begin{aligned} \phi _{+}(z) = e^{- g z} \sum _{n=0}^{\infty } \frac{\Delta K_n^-}{x-\varepsilon -n} (z+g)^n, \qquad \phi _{-}(z) = e^{- g z} \sum _{n=0}^{\infty } K_n^- (z+g)^n, \end{aligned}$$
(6)

and by the symmetry mentioned above, the other set of solutions is given by

$$\begin{aligned} \bar{\phi }_{-}(z) = e^{ g z} \sum _{n=0}^{\infty } \frac{\Delta K_n^+}{x+\varepsilon -n} (z+g)^n, \qquad \bar{\phi }_{+}(z) = e^{ g z} \sum _{n=0}^{\infty } K_n^+ (z+g)^n, \end{aligned}$$
(7)

related by \(\phi _{+}(z) = \bar{\phi }_{+}(-z) \) and \(\phi _{-}(z) = \bar{\phi }_{-}(-z) \). For \(n \in \mathbb {Z}_{\ge 0}\), define the functions \(f^{\pm }_n = f^{\pm }_n(x,g,\Delta ,\varepsilon )\) by

$$\begin{aligned} f^{\pm }_n(x,g,\Delta ,\varepsilon ) = 2g + \frac{1}{2g} \Bigl ( n - x \pm \varepsilon + \frac{\Delta ^2}{x -n \pm \varepsilon } \Bigr ). \end{aligned}$$
(8)

The coefficients \(K^{\pm }_n(x)=K^{\pm }_n(x,g, \Delta ,\varepsilon )\) are then given by the recurrence relation

$$\begin{aligned} n K^{\pm }_{n}(x) = f^{\pm }_{n-1}(x,g,\Delta ,\varepsilon ) K^{\pm }_{n-1}(x) - K^{\pm }_{n-2}(x) \quad (n\ge 1) \end{aligned}$$
(9)

with initial conditions \(K_{-1}^{\pm } = 0\) and \(K_0^{\pm } = 1\).
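The recurrences (8) and (9) translate directly into code; the following sketch (ours, not from the paper) computes \(K_n^{\pm }(x)\) numerically, with `sign` selecting \(\pm \varepsilon \):

```python
# Sketch: the coefficients K_n^{+/-}(x) via the recurrences (8) and (9).
def f(n, x, g, Delta, eps, sign):
    """f_n^{+/-}(x, g, Delta, eps) of (8); sign is +1 or -1."""
    return 2*g + (n - x + sign*eps + Delta**2 / (x - n + sign*eps)) / (2*g)

def K(n, x, g, Delta, eps, sign):
    """K_n^{+/-}(x) from (9), with K_{-1} = 0 and K_0 = 1."""
    K_prev, K_cur = 0.0, 1.0
    for m in range(1, n + 1):
        K_prev, K_cur = K_cur, (f(m - 1, x, g, Delta, eps, sign)*K_cur - K_prev)/m
    return K_cur
```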

The solutions (6) (resp. (7)) do not, in general, define entire functions. The condition for the solutions to be entire is given by the G-function, whose definition we now recall; we refer the reader to Braak (2011a, b) for full details.

Definition 4

The \(G\)-function for the Hamiltonian \(H_{\text {Rabi}}^{\varepsilon }\) is defined as

$$\begin{aligned} G_\varepsilon (x;g,\Delta ) := \Delta ^2\bar{R}^+(x;g,\Delta ,\varepsilon ) \bar{R}^-(x;g,\Delta ,\varepsilon ) - R^+(x;g,\Delta ,\varepsilon )R^-(x;g,\Delta ,\varepsilon ) \end{aligned}$$

where

$$\begin{aligned} R^\pm (x;g,\Delta ,\varepsilon ) = \sum _{n=0}^\infty K^{\pm }_n(x) g^n \quad \text { and } \quad \bar{R}^{\pm }(x;g,\Delta ,\varepsilon ) = \sum _{n=0}^\infty \frac{K^{\pm }_n(x)}{x-n\pm \varepsilon } g^n, \end{aligned}$$
(10)

whenever \(x \pm \varepsilon \not \in \mathbb {Z}_{\ge 0} \), respectively.

The main property of the G-function (see, for example, Braak 2011a) is that for a fixed tuple of parameters \((g,\Delta ,\varepsilon )\), the zeros \(x_n\) of \( G_\varepsilon (x;g,\Delta )\) correspond to eigenvalues \(\lambda _n = x_n-g^2\) of \(H_{\text {Rabi}}^{\varepsilon }\) with \(x_n \ne N \pm \varepsilon \) for any \(N \in \mathbb {Z}\). Any such eigenvalue is called a regular eigenvalue of the AQRM. More precisely, if \(x\) is a zero of the G-function, the solutions (6) can be analytically continued to the whole complex plane, and thus constitute solutions of the eigenvalue problem for the eigenvalue \(\lambda = x - g^2\).

In general, not every eigenvalue of the AQRM is regular. An eigenvalue that is not regular is called an exceptional eigenvalue. Equivalently, exceptional eigenvalues are those of the form \(\lambda = N \pm \varepsilon - g^2 \). If the power series in the solution for an exceptional eigenvalue terminates (i.e., is a polynomial), the eigenvalue is called Juddian; otherwise it is called a non-Juddian exceptional eigenvalue. We recall from the introduction that Juddian eigenvalues are those that arise from zeros of the constraint polynomials. We also remark that the exceptional eigenvalues are closely related to the poles of the G-function; we refer the reader to Kimoto et al. (2017) and Li and Batchelor (2015) for more information on exceptional eigenvalues.

With these preparations, we now relate the coefficients of the solutions (resp. of the G-function) with the constraint polynomials. For brevity, we set \(c_k^{(\varepsilon )} = k(k+ 2 \varepsilon ) \) and \(\lambda _k = k (k-1)(N-k+1) \). Then the polynomial \(P^{(N,\varepsilon )}_{k}(x,y)\) is the determinant of a \(k\times k\) tridiagonal matrix

$$\begin{aligned} P^{(N,\varepsilon )}_{k}(x,y)=\det (\mathbf {I}_k y+\mathbf {A}^{(N)}_kx+\mathbf {U}^{(\varepsilon )}_k) \end{aligned}$$
(11)

where \(\mathbf {I}_k\) is the identity matrix of size \(k\) and

$$\begin{aligned} \mathbf {A}^{(N)}_k = {{\,\mathrm{tridiag}\,}}\begin{bmatrix}i & 0 \\ \lambda _{i+1}\end{bmatrix}_{1\le i\le k},\quad \mathbf {U}^{(\varepsilon )}_k={{\,\mathrm{tridiag}\,}}\begin{bmatrix}-c^{(\varepsilon )}_i & 1 \\ 0\end{bmatrix}_{1\le i\le k}, \end{aligned}$$

where we use the notation

$$\begin{aligned} {{\,\mathrm{tridiag}\,}}\begin{bmatrix}a_i & b_i \\ c_i\end{bmatrix}_{1\le i\le n} :=\begin{bmatrix} a_1 & b_1 & 0 & 0 & \cdots & 0 \\ c_1 & a_2 & b_2 & 0 & \cdots & 0\\ 0 & c_2 & a_3 & b_3 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & c_{n-2} & a_{n-1} & b_{n-1} \\ 0 & \cdots & 0 & 0 & c_{n-1} & a_n \end{bmatrix}. \end{aligned}$$
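The determinant expression (11) can be checked against Definition 1 symbolically (a sketch of ours, not the paper's code), here for \(k = 3\) with \(N\) and \(\varepsilon \) symbolic:

```python
# Sketch: verify the determinant expression (11) against Definition 1 for k = 3.
import sympy as sp

x, y, eps, N = sp.symbols('x y epsilon N')
k = 3

lam = lambda i: i*(i - 1)*(N - i + 1)    # lambda_i of the text
c = lambda i: i*(i + 2*eps)              # c_i^{(eps)} of the text

# M = I_k*y + A_k^{(N)}*x + U_k^{(eps)}: diagonal y + i*x - c_i,
# superdiagonal 1, subdiagonal entry (i+1, i) equal to lambda_{i+1}*x.
M = sp.zeros(k, k)
for i in range(1, k + 1):
    M[i - 1, i - 1] = y + i*x - c(i)
    if i < k:
        M[i - 1, i] = 1
        M[i, i - 1] = lam(i + 1)*x

# P^{(N,eps)}_k from the recurrence of Definition 1.
prev, cur = sp.Integer(1), x + y - 1 - 2*eps
for j in range(2, k + 1):
    prev, cur = cur, sp.expand((j*x + y - j*(j + 2*eps))*cur
                               - j*(j - 1)*(N - j + 1)*x*prev)

assert sp.expand(M.det() - cur) == 0
```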

The relation between the Nth coefficient of the G-function and the constraint polynomials is seen in the next lemma.

Lemma 5

(Kimoto et al. 2017) Let \(N \in \mathbb {Z}_{\ge 0}\). For \(g>0\), the relation

$$\begin{aligned} (N!)^2 (2g)^{N} K^{-}_N(N+\varepsilon ;g,\Delta ,\varepsilon ) = P^{(N,\varepsilon )}_{N}((2g)^2,\Delta ^2) \end{aligned}$$
(12)

holds. In addition, if \(\varepsilon = \ell /2 \, (\ell \in \mathbb {Z})\), it also holds that

$$\begin{aligned} ((N+\ell )!)^2 (2g)^{N+\ell } K^{+}_{N+\ell }(N+\ell /2;g,\Delta ,\ell /2) = P^{(N+\ell ,-\ell /2)}_{N+\ell }((2g)^2,\Delta ^2). \end{aligned}$$

From this point of view, the constraint polynomials are multiples of the coefficients of the solutions of the associated system of differential equations at \(x = N+\varepsilon \). This fact is important since it allows us to relate the residues at the poles of the G-function with the presence or absence of exceptional solutions (see Kimoto et al. 2017, Propositions 5.3, 5.5 and 5.6).
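The identity (12) is easy to confirm numerically; the following sketch (ours, not the authors' code) combines the recurrence (9) with Definition 1:

```python
# Sketch: numerical check of (12), i.e.
# (N!)^2 (2g)^N K_N^-(N + eps) = P^{(N,eps)}_N((2g)^2, Delta^2).
import math

def K_minus(n, x, g, Delta, eps):
    """K_n^-(x) via the recurrence (9) with f_m^- of (8)."""
    K_prev, K_cur = 0.0, 1.0
    for m in range(1, n + 1):
        fm = 2*g + (m - 1 - x - eps + Delta**2/(x - (m - 1) - eps))/(2*g)
        K_prev, K_cur = K_cur, (fm*K_cur - K_prev)/m
    return K_cur

def P_constraint(N, xv, yv, eps):
    """P^{(N,eps)}_N(x, y) via the recurrence of Definition 1."""
    if N == 0:
        return 1.0
    prev, cur = 1.0, xv + yv - 1 - 2*eps
    for j in range(2, N + 1):
        prev, cur = cur, (j*xv + yv - j*(j + 2*eps))*cur - j*(j - 1)*(N - j + 1)*xv*prev
    return cur

N, g, Delta, eps = 3, 0.7, 1.1, 0.25
lhs = math.factorial(N)**2 * (2*g)**N * K_minus(N, N + eps, g, Delta, eps)
rhs = P_constraint(N, (2*g)**2, Delta**2, eps)
print(lhs, rhs)   # the two values coincide
```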

We proceed to generalize the result above to all the coefficients of the G-function. First, we note a simple but important relation between the coefficients \(K_n^-(N+\varepsilon ;g,\Delta ,\varepsilon )\) and \(K_n^-(n+\varepsilon ;g,\Delta ,\varepsilon )\) of the G-functions and the corresponding relation between constraint polynomials.

Lemma 6

For \(N,n\in \mathbb {Z}_{\ge 0}\) with \( n \le N\),

$$ K_n^-(N+\varepsilon ;g,\Delta ,\varepsilon ) = K_n^-(n+\varepsilon ;g,\Delta ,\varepsilon ) + q_0(g,\Delta ,\varepsilon ,n,N), $$

where \((2g)^n q_0(g,\Delta ,\varepsilon ,n,N) \in \mathbb {Z}[g,\Delta ,\varepsilon ,n,N] \) and

$$ q_0(g,\Delta ,\varepsilon ,N,N) = q_0(g,\Delta ,\varepsilon ,n,n) = 0. $$

Moreover,

$$ P^{(N,\varepsilon )}_{k}(x,y) = P^{(k,\varepsilon )}_{k}(x,y) + \bar{q}_0(x,y,\varepsilon ,k,N), $$

where \( \bar{q}_0(x,y,\varepsilon ,k,N) \in \mathbb {Z}[x,y,\varepsilon ,k,N] \) and \(\bar{q}_0(x,y,\varepsilon ,N,N) = \bar{q}_0(x,y,\varepsilon ,k,k) = 0\).

Proof

We give the proof for the polynomials \(P^{(N,\varepsilon )}_{k}(x,y)\) as the proof for the coefficients \(K_n^-(N+\varepsilon ;g,\Delta ,\varepsilon ) \) is done in a completely analogous way. In the determinant expression (11) for \(P^{(N,\varepsilon )}_{k}(x,y)\), in each term \(\lambda _i = i (i-1)(N-i+1) \), we write \(N = k + (N-k) \) and then factor out the terms including \(N-k \) by the multilinearity of the determinant. This gives the result.\(\square \)

Next, we relate the coefficients of the solutions at \(x = N + \varepsilon \) with the constraint polynomials \(P^{(n,\varepsilon )}_{n}(x,y)\). In the lemma below, for \(a \in \mathbb {C}\) and \(n \in \mathbb {Z}_{\ge 0} \), \((a)_n= a (a+1)\cdots (a+n-1)\) denotes the Pochhammer symbol.

Lemma 7

For \(N,n\in \mathbb {Z}_{\ge 0}\) with \( n \le N\), we have

$$ n! (N-n+1)_n (2g)^n K_n^-(N+\varepsilon ;g,\Delta ,\varepsilon ) = P^{(n,\varepsilon )}_{n}((2g)^2,\Delta ^2) + q_1(x,y;N,n,\varepsilon ), $$

with \(q_1(x,y;N,n,\varepsilon ) \in \mathbb {Z}[x,y,N,n,\varepsilon ]\) such that \(q_1(x,y;N,N,\varepsilon ) = q_1(x,y;n,n,\varepsilon ) = 0\).

Proof

For \(n \le N\), define the auxiliary polynomials \(P^{(N,n,\varepsilon )}_k(x,y) \) by the three-term recurrence relation

$$\begin{aligned} P^{(N,n,\varepsilon )}_k(x,y) =&( (N-n+k)x + y - (N-n+k)^2 - 2(N-n+k) \varepsilon ) P^{(N,n,\varepsilon )}_{k-1}(x,y) \nonumber \\&- (N-n+k)(N-n+k-1)(n-k+1) x P_{k-2}^{(N,n,\varepsilon )}, \end{aligned}$$
(13)

with initial conditions \( P^{(N,n,\varepsilon )}_0(x,y)=1 \) and

$$ P^{(N,n,\varepsilon )}_1(x,y) = (N-n+1)x + y - (N-n+1)^2 - 2(N-n+1)\varepsilon . $$

Note that setting \(n=N \) gives \(P^{(N,N,\varepsilon )}_k(x,y) = P^{(N,\varepsilon )}_{k}(x,y) \).

Next, the determinant form (or continuant) of the three-term recurrence relation for the coefficients \(K_{n}^-(x;g,\Delta ,\varepsilon )\) is given by

$$\begin{aligned} K_{n}^-(x;g,\Delta ,\varepsilon )&= \frac{1}{n!} \det \begin{pmatrix} f_{n-1}^-(x) & 1 & 0 & \cdots & 0 & 0 \\ n-1 & f_{n-2}^-(x) & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \cdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & f_0^-(x) \end{pmatrix}, \end{aligned}$$

where the prefactor \(\frac{1}{n!}\) results from factoring \(\frac{1}{k}\) out of the row corresponding to each index \(k\). Next, we see that

$$\begin{aligned} f_{k}^{-}(N+\varepsilon )&= 2 g + \frac{1}{2 g} \left( k - N - 2\varepsilon + \frac{\Delta ^2}{N-k} \right) \\&= \frac{1}{(2 g)(N-k)} \left( (2g)^2(N-k) - (N-k)^2 - 2\varepsilon (N-k) + \Delta ^2 \right) \\&= \frac{1}{(2 g)(N-k)} h(k,g,\Delta ), \end{aligned}$$

with \(h(k,g,\Delta ) \) defined implicitly. Thus, we obtain the expression

$$\begin{aligned}&K_{n}^-(N+\varepsilon ;g,\Delta ,\varepsilon ) = \frac{1}{n!(2g)^n (N-n+1)_n} \\&\qquad \quad \times \det {{\,\mathrm{tridiag}\,}}\begin{bmatrix}h(n-i,g,\Delta ) & \,\, (2g)^2 (N-n+i)(N-n+i+1)(n-i) \\ 1\end{bmatrix}_{1 \le i \le n}, \end{aligned}$$

and we verify that the three-term recurrence relation corresponding to this determinant is exactly the one defining the polynomials \(P^{(N,n,\varepsilon )}_k(x,y) \) above, with \(x = (2g)^2 \) and \(y = \Delta ^2 \). Thus, we have proved that

$$ n! (N-n+1)_n (2g)^n K_n^-(N+\varepsilon ;g,\Delta ,\varepsilon ) = P^{(N,n,\varepsilon )}_n((2g)^2,\Delta ^2). $$

The result then follows by factoring out the elements containing \(N-n\) from the determinant associated with the three-term recurrence relation (13).\(\square \)
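The key identity in the proof (the displayed relation between \(K_n^-\) and \(P^{(N,n,\varepsilon )}_n\)) can be checked numerically; the sketch below (ours, not from the paper) implements the recurrences (9) and (13):

```python
# Sketch: check n! (N-n+1)_n (2g)^n K_n^-(N+eps) = P^{(N,n,eps)}_n((2g)^2, Delta^2).
import math

def K_minus(n, x, g, Delta, eps):
    """K_n^-(x) via the recurrence (9)."""
    K_prev, K_cur = 0.0, 1.0
    for m in range(1, n + 1):
        fm = 2*g + (m - 1 - x - eps + Delta**2/(x - (m - 1) - eps))/(2*g)
        K_prev, K_cur = K_cur, (fm*K_cur - K_prev)/m
    return K_cur

def P_aux(N, n, xv, yv, eps):
    """Auxiliary polynomial P^{(N,n,eps)}_n(x, y) via the recurrence (13)."""
    a = lambda k: N - n + k
    if n == 0:
        return 1.0
    prev, cur = 1.0, a(1)*xv + yv - a(1)**2 - 2*a(1)*eps
    for k in range(2, n + 1):
        prev, cur = cur, ((a(k)*xv + yv - a(k)**2 - 2*a(k)*eps)*cur
                          - a(k)*(a(k) - 1)*(n - k + 1)*xv*prev)
    return cur

N, n, g, Delta, eps = 5, 3, 0.6, 0.9, 0.3
poch = math.prod(N - n + 1 + i for i in range(n))     # (N-n+1)_n
lhs = math.factorial(n)*poch*(2*g)**n*K_minus(n, N + eps, g, Delta, eps)
rhs = P_aux(N, n, (2*g)**2, Delta**2, eps)
print(lhs, rhs)   # the two values coincide
```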

From Lemmas 6 and 7, we immediately obtain the following corollary giving several expressions for the coefficients in terms of the polynomials \(P^{(N,\varepsilon )}_{n}(x,y)\).

Corollary 8

For \(N,n\in \mathbb {Z}_{\ge 0}\) with \( n \le N\), we have

$$ P^{(N,\varepsilon )}_{n}((2 g)^2,\Delta ^2) = (n!)^2 (2 g)^n K_n^{-}(N + \varepsilon ; g, \Delta ,\varepsilon ) + q_2(g^2,\Delta ^2,n,N), $$

where \(q_2(g^2,\Delta ^2,n,N) \in \mathbb {Z}[g^2,\Delta ^2,N,n,\varepsilon ]\) such that

$$ q_2(g^2,\Delta ^2,n,n) = q_2(g^2,\Delta ^2,N,N)= 0. $$

Furthermore, we have

$$ P^{(N,\varepsilon )}_{n}((2 g)^2,\Delta ^2) = (n!)^2 (2 g)^n K_n^{-}(n + \varepsilon ; g, \Delta ,\varepsilon ) + \bar{q}_2(g^2,\Delta ^2,n,N), $$

with \(\bar{q}_2(g^2,\Delta ^2,n,N) \) satisfying the same properties as \(q_2(g^2,\Delta ^2,n,N)\).

Using the results above, we can express the solutions of the confluent picture of the AQRM in terms of constraint polynomials. To see this, we notice that for \(n \in \mathbb {Z}_{\ge 0} \) the following identity holds:

$$\begin{aligned} P^{(x,\varepsilon )}_{n}((2 g)^2,\Delta ^2) = (n!) (x-n+1)_n (2 g)^n K_n^{-}(x+\varepsilon ; g, \Delta ,\varepsilon ) + (x-n)q_n(g^2,\Delta ^2,x), \end{aligned}$$
(14)

where \(x \notin \mathbb {Z}_{\ge 0} \) and \(q_n(g^2,\Delta ^2,x) \) is a polynomial with integer coefficients.

Next, we see that the solutions (6), (7) or the functions \(R^\pm \), \(\bar{R}^{\pm }\) appearing in the definition of the G-function can be expressed in terms of constraint polynomials. For instance, we have

$$ R^{-}(x+\varepsilon ;g,\Delta ,\varepsilon ) = \sum _{n=0}^{\infty } \frac{P^{(x,\varepsilon )}_{n}((2 g)^2,\Delta ^2)\, g^n}{n! \, (x-n+1)_n (2 g)^n} - \sum _{n=0}^{\infty } \frac{(x-n)q_n(g^2,\Delta ^2,x)\, g^n}{n! \, (x-n+1)_n (2 g)^n}. $$

From this expression (and the corresponding ones for \(R^+\) and \(\bar{R}^{\pm } \)) it is possible to give an alternative to the method of Kimoto et al. (2017) for computing the residues at the poles of the G-function.

3 Extended Divisibility Properties for Constraint and Related Polynomials

In this section we return to Conjecture 3, originally presented in Wakayama (2017) (see also Reyes-Bustos and Wakayama 2017). As mentioned in the introduction, in its current form the conjecture does not have a unique solution. Indeed, let \(A_k^{(N,\varepsilon )}(x,y),B_k^{(N,\varepsilon )}(x,y)\) and \(\bar{A}_k^{(N,\varepsilon )}(x,y), \bar{B}_k^{(N,\varepsilon )}(x,y) \) be two pairs of polynomials satisfying the conditions of the conjecture. If the coefficients of \( \frac{1}{2} \left( A_k^{(N,\varepsilon )}(x,y) + \bar{A}_k^{(N,\varepsilon )}(x,y) \right) \) and \( \frac{1}{2} \left( B_k^{(N,\varepsilon )}(x,y) + \bar{B}_k^{(N,\varepsilon )}(x,y) \right) \) are integers, then this averaged pair also satisfies the conditions of the conjecture, since \( \frac{1}{2} \left( A_k^{(N,\varepsilon )}(x,y) + \bar{A}_k^{(N,\varepsilon )}(x,y) \right) \) inherits the positivity condition.

To get a better understanding of the divisibility structure, we extend some of the results given in Kimoto et al. (2017) and give a proposal for a solution of the conjecture that is compatible with the case of the constraint polynomials. In particular, we show how to obtain a family of solutions to the conjecture by using a method related to the one discussed above.

First, we recall a simple lemma on diagonalization that we use in the proofs below.

Lemma 9

(Kimoto et al. 2017) For \(1 \le k \le N\), the eigenvalues of \(\mathbf {A}^{(N)}_k\) are \(\{1,2,\ldots ,k\}\) and the eigenvectors are given by the columns of the lower triangular matrix \( \mathbf {E}^{(N)}_k \) given by

$$ (\mathbf {E}^{(N)}_k)_{i,j} = (-1)^{i-j}\left( {\begin{array}{c}i\\ j\end{array}}\right) \frac{(i-1)!(N-j)!}{(j-1)!(N-i)!}, $$

for \( 1 \le i,j \le k \).

Proof

We have to check that \((\mathbf {A}^{(N)}_k\mathbf {E}^{(N)}_k)_{i,j}=j(\mathbf {E}^{(N)}_k)_{i,j}\) for every \(i,j\). By definition, we see that

$$\begin{aligned} (\mathbf {A}^{(N)}_k\mathbf {E}^{(N)}_k)_{i,j}=j(\mathbf {E}^{(N)}_k)_{i,j}&\iff (j-i)(\mathbf {E}^{(N)}_k)_{i,j}=\lambda _i(\mathbf {E}^{(N)}_k)_{i-1,j} \\&\iff (j-i)\left( {\begin{array}{c}i\\ j\end{array}}\right) =-i\left( {\begin{array}{c}i-1\\ j\end{array}}\right) , \end{aligned}$$

and the last equality is easily verified.\(\square \)
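Lemma 9 is also easy to verify symbolically for small sizes (a sketch of ours, not the paper's computation), keeping \(N\) symbolic:

```python
# Sketch: check A_k^{(N)} E_k^{(N)} = E_k^{(N)} diag(1,...,k) for k = 4.
import sympy as sp

N = sp.symbols('N')
k = 4

# A_k^{(N)}: diagonal entries i, entry (i+1, i) equal to lambda_{i+1} = (i+1)i(N-i).
A = sp.zeros(k, k)
for i in range(1, k + 1):
    A[i - 1, i - 1] = i
    if i < k:
        A[i, i - 1] = (i + 1)*i*(N - i)

# E_k^{(N)}: lower triangular; (N-j)!/(N-i)! is written as a falling factorial
# so that N can stay symbolic.
E = sp.zeros(k, k)
for i in range(1, k + 1):
    for j in range(1, i + 1):
        E[i - 1, j - 1] = ((-1)**(i - j)*sp.binomial(i, j)
                           *sp.factorial(i - 1)*sp.ff(N - j, i - j)
                           /sp.factorial(j - 1))

D = sp.diag(*range(1, k + 1))
assert (A*E - E*D).applyfunc(sp.expand) == sp.zeros(k, k)
```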

Next, we see that in general the polynomials \(P^{(N,\varepsilon )}_{k}(x,y) \) are expressed as the determinant of a tridiagonal matrix plus a rank-one matrix.

Proposition 10

Let \(k\in \mathbb {Z}_{\ge 0}\), then

$$ P^{(N,\varepsilon )}_{k}(x,y) = \det \left( \mathbf {I}_k y + \mathbf {D}_k x + \mathbf {C}_k^{(N,\varepsilon )} + \mathbf {e}_k {}^{T}\mathbf {u}^{(N)}_k \right) , $$

where \(\mathbf {I}_k\) is the identity matrix, \(\mathbf {D}_k = {{\,\mathrm{diag}\,}}(1,2,\ldots ,k)\), and \(\mathbf {C}_k^{(N,\varepsilon )}\) is the tridiagonal matrix given by

$$ \mathbf {C}_k^{(N,\varepsilon )} = {{\,\mathrm{tridiag}\,}}\begin{bmatrix} - i(2(N-i) + 1 + 2\varepsilon ) & 1 \\ i(i+1)c_{N-i}^{(\varepsilon )}\end{bmatrix}_{1 \le i \le k}, $$

\(\mathbf {e}_k \in \mathbb {R}^k\) is the \(k\)th standard basis vector, and \(\mathbf {u}^{(N)}_k \in \mathbb {R}^{k}\) is given entrywise by

$$\begin{aligned} \left( \mathbf {u}^{(N)}_k \right) _j = (-1)^{k-j+2} \left( {\begin{array}{c}k+1\\ j\end{array}}\right) \frac{k! (N-j)!}{(j-1)!(N-k-1)!} \,. \end{aligned}$$

Proof

By Lemma 9, the eigenvalues of \(\mathbf {A}^{(N)}_k\) are \(\{1,2,\ldots ,k\}\) and the eigenvectors are given by the columns of the lower triangular matrix \( \mathbf {E}^{(N)}_k \) given by

$$ (\mathbf {E}^{(N)}_k)_{i,j} = (-1)^{i-j}\left( {\begin{array}{c}i\\ j\end{array}}\right) \frac{(i-1)!(N-j)!}{(j-1)!(N-i)!}. $$

Then, it suffices to verify that

$$\begin{aligned} \mathbf {U}^{(\varepsilon )}_k \mathbf {E}^{(N)}_k = \mathbf {E}^{(N)}_k \mathbf {C}^{(N,\varepsilon )}_k + \mathbf {E}^{(N)}_k \mathbf {e}_k {}^{T}\mathbf {u}^{(N)}_k. \end{aligned}$$
(15)

Note that the \(k\)th column of \(\mathbf {E}^{(N)}_k\) is \( \mathbf {e}_k\), therefore the last summand reduces to \(\mathbf {e}_k {}^{T}\mathbf {u}^{(N)}_k\).

For \(i,j \le k\), set

$$ d_{ij}=(-1)^{i-j}\left( {\begin{array}{c}i\\ j\end{array}}\right) \frac{(i-1)!(N-j)!}{(j-1)!(N-i)!}, $$

then, by using the elementary identities

$$\begin{aligned} j(j+1)c^{(\varepsilon )}_{N-j}d_{i,j+1}=-(i-j)(N-j+2\varepsilon )d_{ij}, \end{aligned}$$
$$\begin{aligned} d_{i+1,j}-d_{i,j-1}=(i^2+j^2+ij-j-iN-jN)d_{ij}, \end{aligned}$$

we see that

$$\begin{aligned} -c^{(\varepsilon )}_id_{ij}+d_{i+1,j} +j(2(N-j)+1+2\varepsilon )d_{ij}-d_{i,j-1}-j(j+1)c^{(\varepsilon )}_{N-j}d_{i,j+1} = 0. \end{aligned}$$
(16)

For \(i,j \le k \), we have \(d_{ij} = (\mathbf {E}^{(N)}_k)_{i,j} \) and (16) directly gives (15) for \(1 \le j \le k \) and \( 1 \le i \le k-1\). For \(i = k \), equation (16) reads

$$ (\mathbf {U}^{(\varepsilon )}_k \mathbf {E}^{(N)}_k - \mathbf {E}^{(N)}_k \mathbf {C}^{(N,\varepsilon )}_k )_{k,j} = -d_{k+1,j}, $$

and the right-hand side is equal to the \(j\)th entry of \(\mathbf {u}^{(N)}_k \), as desired.\(\square \)

Note that when \(k = N \), by the definition of the entries, the vector \(\mathbf {u}^{(N)}_k\) is equal to the zero vector, and the proposition above reduces to Proposition 4.2 of Kimoto et al. (2017).

Corollary 11

Let \(k\in \mathbb {Z}_{\ge 0}\), then

$$ P^{(N,\varepsilon )}_{k}(x,y) = \det \left( \mathbf {I}_k y + \mathbf {D}_k x + \mathbf {C}_k^{(N,\varepsilon )} \right) + R_k^{(N,\varepsilon )}(x,y), $$

for a polynomial \(R_k^{(N,\varepsilon )} \in \mathbb {R}[x,y]\) with \(R_N^{(N,\varepsilon )}(x,y)=0 \).

Note that the polynomial \(R_k^{(N,\varepsilon )} \) satisfies the condition required of the polynomial \(B_k^{(N,\ell )}(x,y) \) in the conjecture. Moreover, the polynomials described by the determinant of the tridiagonal matrix

$$ \det \left( \mathbf {I}_k y + \mathbf {D}_k x + \mathbf {C}_k^{(N,\varepsilon )} \right) $$

are exactly the polynomials \(Q_k^{(N,\varepsilon )}(x,y)\) of Remark 3.6 of Kimoto et al. (2017).

Proof

It is well known that if \(\mathbf {A}\) is a square matrix and \(\mathbf {u}, \mathbf {v}\) are column vectors, then

$$ \det (\mathbf {A} + \mathbf {u} \, {}^T \mathbf {v} ) = \det (\mathbf {A}) + {}^T \mathbf {v} \, {{\,\mathrm{adj}\,}}(\mathbf {A}) \, \mathbf {u}, $$

where \({{\,\mathrm{adj}\,}}(A) \) is the adjugate matrix, the transpose of the matrix of cofactors of \(A\). Applying this result along with Proposition 10, we get the determinant expression. Furthermore, we see that

$$ R_k^{(N,\varepsilon )}(x,y) = {}^T \mathbf {u}^{(N)}_k \, {{\,\mathrm{adj}\,}}\left( \mathbf {I}_k y + \mathbf {D}_k x + \mathbf {C}_k^{(N,\varepsilon )} \right) \mathbf {e}_k $$

is a polynomial, since the entries of \({{\,\mathrm{adj}\,}}\left( \mathbf {I}_k y + \mathbf {D}_k x + \mathbf {C}_k^{(N,\varepsilon )} \right) \) are cofactors and hence polynomials in \(x\) and \(y\). As mentioned above, \(\mathbf {u}^{(N)}_k= 0 \) when \(N=k\), and thus the second claim follows.\(\square \)
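The rank-one determinant identity used in the proof can be sanity-checked symbolically on a generic matrix (a sketch of ours, not part of the paper):

```python
# Sketch: det(A + u v^T) = det(A) + v^T adj(A) u, on a generic 3x3 matrix.
import sympy as sp

A = sp.Matrix(3, 3, sp.symbols('a0:9'))
u = sp.Matrix(sp.symbols('u0:3'))    # 3x1 column vector
v = sp.Matrix(sp.symbols('v0:3'))    # 3x1 column vector

lhs = (A + u*v.T).det()
rhs = A.det() + (v.T*A.adjugate()*u)[0, 0]
assert sp.expand(lhs - rhs) == 0
```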

Remark 12

The polynomial \(R_k^{(N,\varepsilon )}(x,y) \) is given explicitly by

$$ R_k^{(N,\varepsilon )}(x,y) = -\sum _{j=0}^{k-1} (-1)^{k-j} \left( {\begin{array}{c}k+1\\ j+1\end{array}}\right) \frac{k!(N-(j+1))!}{j!(N-(k+1))!} P^{(N,\varepsilon )}_{j}(x,y). $$

In particular, this expression can be interpreted as the Fourier expansion of the polynomial \(R_k^{(N,\varepsilon )}(x,y)\) with respect to the family of generalized orthogonal polynomials \(\left\{ P^{(N,\varepsilon )}_{k}(x,y)\right\} _{k\ge 0} \) (compare with Remark 7.2 in Kimoto et al. 2017). Here, generalized orthogonal polynomials (with respect to the variable y) are used in the sense of Brezinski (1980).

It also follows that

$$\begin{aligned} Q_k^{(N,\varepsilon )}(x,y) = \sum _{j=0}^{k} (-1)^{k-j} \left( {\begin{array}{c}k+1\\ j+1\end{array}}\right) \frac{k!(N-(j+1))!}{j!(N-(k+1))!} P^{(N,\varepsilon )}_{j}(x,y), \end{aligned}$$
(17)

and since \(Q_k^{(N,\varepsilon )}(x,y)\) are polynomials given by the determinant of a tridiagonal matrix, we immediately see that the right-hand side of (17) satisfies the three-term recurrence relation

$$\begin{aligned} Q^{(N,\varepsilon )}_k(x,y) =&(k x + y - k (2(N+1-k) -1 + 2\varepsilon )) Q^{(N,\varepsilon )}_{k-1}(x,y) \\&- k (k-1)(N+1-k)(N+1-k+2\varepsilon ) Q^{(N,\varepsilon )}_{k-2}(x,y), \end{aligned}$$

which should be contrasted with Definition 1.
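The agreement between the determinant expression and the three-term recurrence can be verified symbolically. The sketch below (Python with SymPy) builds the tridiagonal matrix \(\mathbf {I}_k y + \mathbf {D}_k x + \mathbf {C}_k^{(N,\varepsilon )}\), taking \(\mathbf {D}_k = {{\,\mathrm{diag}\,}}(1,\ldots ,k)\) and the entries of \(\mathbf {C}_k^{(N,\varepsilon )}\) read off from the proofs of this section (an assumption on the exact statement of Proposition 10, which is not reproduced here), and compares its determinant with the recurrence displayed above.

```python
import sympy as sp

x, y = sp.symbols("x y")

def Q_det(k, N, eps):
    # I_k*y + D_k*x + C_k^{(N,eps)} as a tridiagonal matrix:
    # diagonal i*x + y - i*(2*(N-i) + 1 + 2*eps), superdiagonal 1,
    # subdiagonal i*(i+1)*(N-i)*(N-i+2*eps), i = 1, ..., k-1
    M = sp.zeros(k, k)
    for i in range(1, k + 1):
        M[i-1, i-1] = i*x + y - i*(2*(N - i) + 1 + 2*eps)
    for i in range(1, k):
        M[i-1, i] = 1
        M[i, i-1] = i*(i+1)*(N - i)*(N - i + 2*eps)
    return sp.expand(M.det())

def Q_rec(k, N, eps):
    # the three-term recurrence displayed in the text
    if k == 0:
        return sp.Integer(1)
    if k == 1:
        return sp.expand(x + y - (2*N - 1 + 2*eps))
    return sp.expand((k*x + y - k*(2*(N + 1 - k) - 1 + 2*eps))*Q_rec(k-1, N, eps)
                     - k*(k-1)*(N + 1 - k)*(N + 1 - k + 2*eps)*Q_rec(k-2, N, eps))

N, eps = 5, sp.Rational(1, 2)
for k in range(1, 5):
    assert sp.expand(Q_det(k, N, eps) - Q_rec(k, N, eps)) == 0
```

The initial values \(Q_0 = 1\) and \(Q_1 = x + y - (2N - 1 + 2\varepsilon )\) used above are those of the \(1 \times 1\) determinant.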

We note one more interesting consequence of equation (17). Setting vectors

$$\begin{aligned} {}^{T}\textit{\textbf{P}}^{(N,\varepsilon )}_k(x,y)&= (P^{(N,\varepsilon )}_{0}(x,y),P^{(N,\varepsilon )}_{1}(x,y),\ldots ,P^{(N,\varepsilon )}_{k-1}(x,y)) \\ {}^{T}\textit{\textbf{Q}}^{(N,\varepsilon )}_k(x,y)&= (Q^{(N,\varepsilon )}_0(x,y),Q^{(N,\varepsilon )}_1(x,y), \ldots ,Q^{(N,\varepsilon )}_{k-1}(x,y)), \end{aligned}$$

we verify that

$$ \textit{\textbf{Q}}^{(N,\varepsilon )}_k(x,y) = \varvec{\mathrm {E}}^{(N)}_k \textit{\textbf{P}}^{(N,\varepsilon )}_k(x,y), $$

where \(\varvec{\mathrm {E}}^{(N)}_k\) is the matrix of Lemma 9. These identities and the relation with orthogonal polynomials are part of a forthcoming paper by the author (Reyes-Bustos 2019).

For completeness, we note that the case \(k=N\) of the corollary above, which reduces to the result given in Kimoto et al. (2017), is used to show, among other things, that for fixed \(x \in \mathbb {R}\) (resp. \(y \in \mathbb {R}\)) all the roots with respect to \(y\) (resp. \(x\)) of the constraint polynomial \(P^{(N,\varepsilon )}_{N}(x,y)\) are real when \(\varepsilon > -1/2 \) (see Theorem 3.6 of Kimoto et al. 2017).

Corollary 13

Let \( N \in \mathbb {Z}_{\ge 0}\). We have

$$ P^{(N,\varepsilon )}_{N}(x,y) = \det \left( \varvec{\mathrm {I}}_N y + \varvec{\mathrm {D}}_N x + \varvec{\mathrm {S}}_N^{(N,\varepsilon )} \right) , $$

where \(\varvec{\mathrm {D}}_N\) is the diagonal matrix of Proposition 10 and \(\varvec{\mathrm {S}}_N^{(N,\varepsilon )}\) is the symmetric matrix given by

$$ \varvec{\mathrm {S}}_N^{(N,\varepsilon )} = {{\,\mathrm{tridiag}\,}}\begin{bmatrix}-i(2(N-i)+1+2\varepsilon ) &{} \sqrt{i(i+1)c^{(\varepsilon )}_{N-i}} \\ \sqrt{i(i+1)c^{(\varepsilon )}_{N-i}}\end{bmatrix}_{1\le i\le N}. $$

Proof

Consider the case \(k = N \) in Proposition 10. Notice that the matrices \(\varvec{\mathrm {I}}_N y + \varvec{\mathrm {D}}_N x + \varvec{\mathrm {C}}^{(N,\varepsilon )}_N\) and \(\varvec{\mathrm {I}}_N y + \varvec{\mathrm {D}}_N x + \varvec{\mathrm {S}}_N^{(N,\varepsilon )} \) are tridiagonal. By comparing the products of corresponding off-diagonal elements, we see that the two determinants are equal.\(\square \)
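The proof rests on the standard fact that the determinant of a tridiagonal matrix depends on its off-diagonal entries only through the products of corresponding superdiagonal and subdiagonal pairs, so that each pair \((u_i, l_i)\) may be replaced by \((\sqrt{u_i l_i}, \sqrt{u_i l_i})\) whenever the products are nonnegative. A minimal numerical illustration (with arbitrary entries, not those of \(\varvec{\mathrm {S}}_N^{(N,\varepsilon )}\)):

```python
import math, random

def tridiag_det(diag, sup, sub):
    # determinant via the standard three-term recurrence:
    # D_i = diag[i]*D_{i-1} - sub[i-1]*sup[i-1]*D_{i-2}
    d_prev, d = 1.0, 1.0
    for i, a in enumerate(diag):
        d_prev, d = d, a*d - (sub[i-1]*sup[i-1]*d_prev if i > 0 else 0.0)
    return d

random.seed(0)
n = 6
diag = [random.uniform(-3, 3) for _ in range(n)]
sup = [1.0]*(n-1)                                   # superdiagonal of ones
sub = [random.uniform(0, 4) for _ in range(n-1)]    # nonnegative subdiagonal

sym = [math.sqrt(s*u) for s, u in zip(sub, sup)]    # sqrt(l_i * u_i)
assert abs(tridiag_det(diag, sup, sub) - tridiag_det(diag, sym, sym)) < 1e-9
```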

As in the case \(N=k\), when the parameter \(\varepsilon \) is a half-integer we obtain special divisibility properties for the polynomials \(P^{(N,\varepsilon )}_{k}(x,y)\) by factoring the determinant expression.

Proposition 14

Let \(\ell ,k \in \mathbb {Z}_{\ge 0}\), then

$$ P^{(N+\ell ,-\frac{\ell + N - k}{2})}_{k+\ell }(x,y) = \bar{A}_{k}^{(N,\ell )}(x,y) P^{(N,\frac{\ell + N - k}{2})}_{k}(x,y) + \bar{B}_k^{(N,\ell )}(x,y) $$

with \(\bar{B}^{(N,\ell )}_N(x,y) = 0 \). Moreover, the polynomial \(\bar{A}_{k}^{(N,\ell )}(x,y)\) is given by

$$ \bar{A}_{k}^{(N,\ell )}(x,y) = \frac{(k+\ell )!}{k!} \det {{\,\mathrm{tridiag}\,}}\begin{bmatrix}x + \frac{y}{k+i} + 2 i - 1 +k - N - \ell &{} \phantom {0}1 \\ c_{-i}^{(\frac{N+\ell -k}{2})}\end{bmatrix}_{1 \le i \le \ell }. $$

As is easily seen from the definition, and as we have already noted in (14), the variable \(N\) in the constraint polynomial may be taken to assume real values; in other words, we may treat it as a free variable. In this way, this result, together with Theorem 16 below, can be interpreted as a divisibility modulo \(N-k\), that is,

$$ P^{(N+\ell ,-\frac{\ell + N - k}{2})}_{k+\ell }(x,y) \equiv \bar{A}_{k}^{(N,\ell )}(x,y) P^{(N,\frac{\ell + N - k}{2})}_{k}(x,y) \pmod {N-k}. $$

We make this assumption in the remainder of this section to simplify the proofs.

Proof

We begin with the determinant expression of Corollary 11 for the polynomial \(P^{(N+\ell ,-\frac{\ell + N - k}{2})}_{k+\ell }(x,y) \), that is

$$ P^{(N+\ell ,-\frac{\ell + N - k}{2} )}_{k+\ell }(x,y) = \det \left( \mathbf {I}_{k+\ell } y + \mathbf {D}_{k+\ell } x + \mathbf {C}_{k+\ell }^{(N+\ell ,-\frac{\ell + N - k}{2})} \right) + q_{k+\ell }(x,y), $$

where \(q_{k+\ell }(x,y)\) is a polynomial divisible by \(N-k\). The tridiagonal matrix \(\mathbf {C}_{k+\ell }^{(N+\ell ,-\frac{\ell + N - k}{2})} \) is given by

$$ \mathbf {C}_{k+\ell }^{(N+\ell ,-\frac{\ell + N - k}{2})} = {{\,\mathrm{tridiag}\,}}\begin{bmatrix} - i( - 2 i + 1 +\ell + N + k) &{} \phantom {0}1 \\ i(i+1)(N +\ell -i)(k-i)\end{bmatrix}_{1 \le i \le k+\ell }. $$

Note that when \(i = k\), the off-diagonal element \(i(i+1)(N +\ell -i)(k-i) \) vanishes and \(\det \left( \mathbf {I}_{k+\ell } y + \mathbf {D}_{k+\ell } x + \mathbf {C}_{k+\ell }^{(N+\ell ,-\frac{\ell + N - k}{2})} \right) \) can be computed as the product of the determinant of a \(k \times k \) matrix and the determinant of an \(\ell \times \ell \) matrix.

Let us first consider the determinant of the \(\ell \times \ell \)-matrix factor. It is given by

$$ \det {{\,\mathrm{tridiag}\,}}\begin{bmatrix}y + (k+i)x -(k+i)(-2(k+i) + 1 +\ell + N + k) &{} \phantom {0}1 \\ (k+i)(k+i+1)(N+\ell -k-i)(-i) \end{bmatrix}_{1 \le i \le \ell } $$

which is easily seen to be equal to

$$ \bar{A}_{k}^{(N,\ell )}(x,y) = \frac{(k+\ell )!}{k!} \det {{\,\mathrm{tridiag}\,}}\begin{bmatrix}x + \frac{y}{k+i} + 2 i - 1 + k -N -\ell &{} \phantom {0}1 \\ c_{-i}^{(\frac{N+\ell -k}{2})} \end{bmatrix}_{1 \le i \le \ell }. $$

Let us denote by \(q(x,y;N,\ell ,k)\) the remaining factor, that is,

$$ q(x,y;N,\ell ,k) = \det {{\,\mathrm{tridiag}\,}}\begin{bmatrix}i x + y -i (-2 i + 1 + \ell + N + k) &{} \phantom {0}1 \\ i(i+1)(N+\ell -i)(k-i)\end{bmatrix}_{1 \le i \le k}. $$

By Corollary 11, we have

$$\begin{aligned} P^{(N,\frac{\ell + N - k}{2})}_{k}(x,y) -&R_k^{(N,\frac{\ell + N - k}{2})} \\&= \det {{\,\mathrm{tridiag}\,}}\begin{bmatrix}i x + y -i (3 N - 2 i + 1 + \ell - k) &{} \phantom {0}1 \\ i(i+1)(N-i)(2 N - i + \ell -k)\end{bmatrix}_{1 \le i \le k}, \end{aligned}$$

the right-hand side can be written as

$$ \det {{\,\mathrm{tridiag}\,}}\begin{bmatrix}i x + y -i (- 2 i + 1 + \ell + N + k + 2 (N-k) ) &{} \phantom {0}1 \\ i(i+1)(k-i + (N-k))(N + \ell - i + (N-k))\end{bmatrix}_{1 \le i \le k}, $$

and noticing that, entrywise, the matrix in this determinant differs from the one in the determinant expression of \(q(x,y;N,\ell ,k)\) only by multiples of \(N-k\), we obtain

$$ q(x,y;N,\ell ,k) = P^{(N,\frac{\ell + N - k}{2})}_{k}(x,y) + q'(x,y;N,\ell ,k) $$

for a polynomial \(q'(x,y;N,\ell ,k)\) satisfying \(q'(x,y;N,\ell ,N)=0\). This completes the proof.\(\square \)

In order to consider the result for the desired parameter \(\varepsilon = \ell /2\), we need the following lemma.

Lemma 15

Let \(k \in \mathbb {Z}_{\ge 0} \) and \(\delta \in \mathbb {R}\). Then, we have

$$ P^{(N,\varepsilon + \delta )}_{k}(x,y) = P^{(N,\varepsilon )}_{k}(x,y) + 2 \delta q_k^{(N,\varepsilon )}(x,y) $$

for some polynomial \(q_k^{(N,\varepsilon )}(x,y) \in \mathbb {R}[x,y]\).

Proof

It is clear that \(q_0^{(N,\varepsilon )}(x,y)=0\) and \(q_1^{(N,\varepsilon )}(x,y)\) is constant. Assume now that the claim holds for all \(i < k \) for some \(k \in \mathbb {Z}_{\ge 1}\). We have

$$\begin{aligned} P^{(N,\varepsilon +\delta )}_{k}(x,y)&= (k x + y - c_k^{(\varepsilon +\delta )}) P^{(N,\varepsilon + \delta )}_{k-1}(x,y) - \lambda _k x P^{(N,\varepsilon +\delta )}_{k-2}(x,y) \\&= P^{(N,\varepsilon )}_{k}(x,y) - 2 k \delta P^{(N,\varepsilon )}_{k-1}(x,y) + 2 \delta (k x + y - c_k^{(\varepsilon +\delta )}) q^{(N,\varepsilon )}_{k-1}(x,y) \\&\qquad - 2 \delta \lambda _k x \, q_{k-2}^{(N,\varepsilon )}(x,y) \\&= P^{(N,\varepsilon )}_{k}(x,y) + 2 \delta q_{k}^{(N,\varepsilon )}(x,y) \end{aligned}$$

and the result follows by induction.\(\square \)
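The induction uses only that \(c_k^{(\varepsilon + \delta )} = c_k^{(\varepsilon )} + 2 k \delta \), that is, that \(c_k^{(\varepsilon )}\) is affine in \(\varepsilon \) with slope \(2k\). The following symbolic sketch verifies the lemma under this assumption; the concrete form chosen for \(c_k^{(\varepsilon )}\) is an assumption (consistent with the diagonal entries of the determinant expressions above), and the coefficients \(\lambda _k\) of Definition 1 are left as free symbols.

```python
import sympy as sp

x, y, eps, delta, N = sp.symbols("x y epsilon delta N")
lam = sp.symbols("lambda1:6")  # generic coefficients lambda_1, ..., lambda_5

def c(k, e):
    # assumed diagonal coefficient; only the slope 2*k in e matters here
    return k*(2*(N - k) + 1 + 2*e)

def P(k, e):
    # three-term recurrence of the shape used in the proof
    if k == 0:
        return sp.Integer(1)
    if k == 1:
        return x + y - c(1, e)
    return sp.expand((k*x + y - c(k, e))*P(k-1, e) - lam[k-1]*x*P(k-2, e))

for k in range(5):
    diff = sp.expand(P(k, eps + delta) - P(k, eps))
    # vanishing at delta = 0 shows the difference is divisible by delta
    assert diff.subs(delta, 0) == 0
```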

Finally, we give a particular solution to Conjecture 3.

Theorem 16

Let \(\ell ,k \in \mathbb {Z}_{\ge 0}\), then

$$ P^{(N+\ell ,-\frac{\ell }{2})}_{k+\ell }(x,y) = A_{k}^{(N,\ell )}(x,y) P^{(N,\frac{\ell }{2})}_{k}(x,y) + B_k^{(N,\ell )}(x,y) $$

with \(B^{(N,\ell )}_N(x,y) = 0 \). Moreover, the polynomial \(A_{k}^{(N,\ell )}(x,y)\) is given by

$$ A_{k}^{(N,\ell )}(x,y) = \frac{(k+\ell )!}{k!} \det {{\,\mathrm{tridiag}\,}}\begin{bmatrix}x + \frac{y}{k+i} + 2 i - 1 - \ell &{} \phantom {0}1 \\ c_{-i}^{(\frac{\ell }{2})}\end{bmatrix}_{1 \le i \le \ell }. $$

Note that the polynomial \(A_{k}^{(N,\ell )}(x,y)\) does not depend on the parameter \(N\). Because of this, positivity follows trivially from the result for the polynomials \(A_k^{(\ell )}(x,y) \) given in Kimoto et al. (2017). That is, we have \(A_{k}^{(N,\ell )}(x,y)>0\) for \(x,y>0\).
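Since \(A_{k}^{(N,\ell )}(x,y)\) does not depend on \(N\), its positivity for \(x,y>0\) can be spot-checked numerically directly from the determinant expression of Theorem 16. In the sketch below the concrete value \(c_{-i}^{(\ell /2)} = -i(\ell - i)\) is an assumption, obtained by extrapolating \(c_j^{(\varepsilon )} = j(j + 2\varepsilon )\), which is consistent with the off-diagonal products appearing in the proofs above.

```python
from math import factorial

def A(k, ell, x, y):
    # A_k^{(ell)}(x, y) from the tridiagonal determinant of Theorem 16;
    # c_{-i}^{(ell/2)} = -i*(ell - i) is an assumed concrete value
    diag = [x + y/(k + i) + 2*i - 1 - ell for i in range(1, ell + 1)]
    sub = [-i*(ell - i) for i in range(1, ell)]  # paired with superdiagonal 1's
    d_prev, d = 1.0, 1.0
    for i, a in enumerate(diag):
        d_prev, d = d, a*d - (sub[i-1]*d_prev if i > 0 else 0.0)
    return factorial(k + ell)//factorial(k) * d

# spot-check positivity for x, y > 0 on a small grid
for k in range(4):
    for ell in range(1, 5):
        for xv in (0.5, 1.0, 2.0, 5.0):
            for yv in (0.5, 1.0, 2.0, 5.0):
                assert A(k, ell, xv, yv) > 0
```

For instance, \(A_0^{(1)}(x,y) = x + y\), which the routine reproduces; near the origin the determinant can approach zero, consistent with positivity holding only on the open quadrant \(x,y>0\).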

Proof

First, applying Lemma 15 to the polynomials on both sides of the identity of Proposition 14, it is easy to see that

$$ P^{(N+\ell ,-\frac{\ell }{2})}_{k+\ell }(x,y) = \bar{A}_{k}^{(N,\ell )}(x,y) P^{(N,\frac{\ell }{2})}_{k}(x,y) + \bar{C}_k^{(N,\ell )}(x,y) $$

for some polynomial \(\bar{C}_k^{(N,\ell )}(x,y)\) satisfying \(\bar{C}_N^{(N,\ell )}(x,y)=0\). Note that the matrices in the determinant expressions of \(\bar{A}_{k}^{(N,\ell )}(x,y)\) and \(A_{k}^{(N,\ell )}(x,y)\) differ entrywise by at most a multiple of \(N-k \), therefore

$$ A_{k}^{(N,\ell )}(x,y) = \bar{A}_{k}^{(N,\ell )}(x,y) + (N-k) q^{(N,\ell )}(x,y) $$

for some polynomial \(q^{(N,\ell )}(x,y) \in \mathbb {Z}[x,y]\), completing the proof.\(\square \)

It is important to mention that Theorem 16 may be proved by defining directly

$$ B_k^{(N,\ell )}(x,y) = P^{(N+\ell ,-\frac{\ell }{2})}_{k+\ell }(x,y) - A_{k}^{(\ell )}(x,y) P^{(N,\frac{\ell }{2})}_{k}(x,y), $$

and appealing to the results of Kimoto et al. (2017). However, in the proof above we wanted to emphasize how the polynomial \(A_{k}^{(\ell )}(x,y)\) appears naturally by extending the main results of Kimoto et al. (2017).

Let us now return to the discussion on Conjecture 3 started at the beginning of the section. For an arbitrary (nonzero) polynomial \(p(x,y)\), by setting

$$ \hat{A}_{k}^{(N,\ell )}(x,y) = A_{k}^{(\ell )}(x,y) + k( N-k) p(x,y) $$

we verify the relation

$$ P^{(N+\ell ,-\frac{\ell }{2})}_{k+\ell }(x,y) = \hat{A}_{k}^{(N,\ell )}(x,y) P^{(N,\frac{\ell }{2})}_{k}(x,y) + \hat{B}_k^{(N,\ell )}(x,y), $$

with

$$ \hat{B}_k^{(N,\ell )}(x,y) = B_k^{(N,\ell )}(x,y) - k( N-k) p(x,y) P^{(N,\frac{\ell }{2})}_{k}(x,y), $$

giving another solution to the conjecture as long as

$$ \hat{A}_{k}^{(N,\ell )}(x,y) >0 $$

for \(x,y>0\) and \(0 \le k \le N\). This method therefore produces a family of solutions of the conjecture related to the particular solution \(A_{k}^{(\ell )}(x,y) \). It would be desirable to characterize all the solutions to the problem posed in Conjecture 3 or, in other words, to find the solutions of minimal degree for \(\hat{B}_k^{(N,\ell )}(x,y)\) (or \(\hat{A}_{k}^{(N,\ell )}(x,y)\)) while retaining the positivity of \(\hat{A}_{k}^{(N,\ell )}(x,y)\). We note that the method used to show the positivity of the polynomial \(A_{k}^{(\ell )}(x,y)\) in Kimoto et al. (2017) does not extend in general to the polynomial \(\hat{A}_{k}^{(N,\ell )}(x,y)\) described here.

In conclusion, we leave the question of Conjecture 3 open but, in view of the discussion above, change the problem from one of existence to one of characterization of solutions.

Problem 17

Characterize all pairs of solutions \(A_{k}^{(N,\ell )}(x,y)\) and \(B_k^{(N,\ell )}(x,y)\) of Conjecture 3. Alternatively, describe the “minimal” solutions according to certain criteria (e.g., degree).

4 Open Problems

To complement Problem 17, in this section we describe some open problems related to constraint polynomials and Juddian solutions of the AQRM and the QRM.

4.1 Number of Exceptional Solutions of the AQRM

For fixed \(\Delta > 0 \) and \(N \in \mathbb {Z}_{\ge 0}\), the number of values of \(g >0 \) such that \(\lambda = N \pm \varepsilon - g^2\) is a Juddian solution is, by the results in Li and Batchelor (2015) (see also Kimoto et al. 2017), exactly \(N-k\), where \(k\) is the integer satisfying

$$ k(k+ 2\varepsilon ) \le \Delta ^2 < (k+1)(k+1+2\varepsilon ). $$

This gives a complete answer to the problem of counting the number of Juddian solutions for fixed \(\Delta \) when \(g\) is allowed to vary. From the G-functions for non-Juddian exceptional eigenvalues (called T-functions in Kimoto et al. 2017), it is not difficult to obtain a condition on \(\Delta \) for the existence of non-Juddian exceptional solutions in the case of the QRM, but such an estimate provides no information on the exact number of non-Juddian exceptional solutions, and no further results in this direction are known.
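The counting statement can be phrased as a small routine: given \(\Delta \) and \(\varepsilon \ge 0\), locate the integer \(k\) in the displayed inequality and return \(N-k\). The helper below is hypothetical; clamping the count at zero for \(k > N\) is our assumption, as the quoted result concerns the regime \(k \le N\).

```python
def juddian_count(Delta, eps, N):
    # number of g > 0 such that lambda = N +/- eps - g**2 is Juddian:
    # locate the integer k with k*(k+2*eps) <= Delta**2 < (k+1)*(k+1+2*eps),
    # then return N - k (clamped at 0: an assumption for k > N)
    k = 0
    while (k + 1)*(k + 1 + 2*eps) <= Delta**2:
        k += 1
    return max(N - k, 0)

# e.g. Delta = 1, eps = 0 gives k = 1, hence 4 Juddian values of g at N = 5
```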

A different problem along the same lines is to determine, for fixed \(g,\Delta > 0\), the number of exceptional solutions present in the spectrum of \(H_{\text {Rabi}}^{\varepsilon }\). For the case of Juddian eigenvalues, this corresponds to finding all \(N \in \mathbb {Z}_{\ge 0}\) such that

$$ P^{(N,\varepsilon )}_{N}((2g)^2,\Delta ^2) = 0, $$

for the given \(g,\Delta > 0 \). We recall that since the polynomials \(P^{(N,\varepsilon )}_{N}((2g)^2,\Delta ^2)\) do not constitute a family of orthogonal polynomials in the usual sense (i.e., with respect to the variable \(x=(2g)^2\) or \(y=\Delta ^2\)), except in the case \(\Delta = 0 \), almost nothing is known about the relations between their zeros. The same problem can be posed for non-Juddian exceptional eigenvalues but, as in the Juddian case, there are no results in this direction.
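For Juddian eigenvalues the constraint condition can be explored numerically from the determinant expression of Corollary 13 (equivalently, Corollary 11 with \(k = N\), where the remainder vanishes). The sketch below uses the tridiagonal entries read off from the proofs of the previous section (an assumption on their exact form) and counts the positive roots in \(x = (2g)^2\) for fixed \(y = \Delta ^2\); by the count quoted at the beginning of this subsection, this number is \(N - k\).

```python
import sympy as sp

x = sp.symbols("x")

def P_NN(N, eps, y):
    # P_N^{(N,eps)}(x, y) via the determinant expression of Corollary 13;
    # tridiagonal entries as in the proofs above (assumed form)
    M = sp.zeros(N, N)
    for i in range(1, N + 1):
        M[i-1, i-1] = i*x + y - i*(2*(N - i) + 1 + 2*eps)
    for i in range(1, N):
        M[i-1, i] = 1
        M[i, i-1] = i*(i+1)*(N - i)*(N - i + 2*eps)
    return sp.expand(M.det())

def positive_real_roots(N, eps, Delta2):
    p = sp.Poly(P_NN(N, eps, Delta2), x)
    return [r for r in p.nroots() if abs(sp.im(r)) < 1e-8 and sp.re(r) > 0]

# e.g. N = 2, eps = 1/2, Delta^2 = 1: here k = 0, and indeed both
# roots x = (2g)^2 turn out to be real and positive
```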

4.2 Classification of Parameter Regimes

The parameter regimes of the QRM are defined according to different observed properties of the model, especially its dynamical properties, and according to whether it can be approximated by simpler models (like the Jaynes–Cummings model). However, as remarked in Rossatto et al. (2017), the characterization of the coupling regimes is not universally agreed upon, and there is a need for a more precise criterion.

In the same paper, the authors give a new proposal for the characterization of the coupling regimes of the QRM that depends not only on the parameters of the system but also on its energy levels. This classification is based on the study of approximate exceptional solutions of the eigenvalue problem of the QRM. It has the advantage of giving a precise differentiation between the coupling regimes, based on observations made by the authors on the static and dynamical properties of the QRM in these regimes.

For instance, in this proposal the perturbative ultrastrong coupling regime (pUSC) roughly corresponds to combinations of parameters \(g,\omega ,\Delta \) and eigenvalues \(\lambda \) lying to the left of the first Juddian solution in the spectral curve graph. The perturbative deep strong coupling regime (pDSC) is similarly defined by the combination of parameters \(g,\omega ,\Delta \) and eigenvalues \(\lambda \) lying past a boundary curve (in the \((\lambda ,g)\)-plane) after the last Juddian solution (or the first non-Juddian solution). The non-perturbative ultrastrong-deep strong coupling regime (npUSC-DSC) would then correspond to the remaining region in the \((\lambda ,g)\)-plane.

Thus, in order to describe the boundaries between the parameter regimes effectively, it is important to estimate the parameters corresponding to the first and last Juddian solutions for each level \(N\), and also to the first non-Juddian exceptional solution for the level \(N\). More generally, it would be interesting to have an estimate for the distribution of the zeros of constraint polynomials and of the constraint functions for non-Juddian exceptional eigenvalues.