1 Introduction

The Calogero–Moser–Sutherland (CMS) systems form an important class of integrable systems with deep relations to geometry, algebra, combinatorics, and other areas. The original works of Calogero, Sutherland and Moser dealt with Hamiltonians describing pairwise interactions of particles on a circle or a line [2, 20, 28]. This case was related to the root system \(A_n\) by Olshanetsky and Perelomov, who extended CMS Hamiltonians to the case of an arbitrary root system [21]. In the quantum case, these Hamiltonians are closely related to the radial parts of Laplace–Beltrami operators on symmetric spaces [1, 22].

Ruijsenaars and Schneider introduced a relativistic version of the CMS system in the classical case in [26]. The quantum Hamiltonian and its integrals, which are difference operators, were studied by Ruijsenaars in [23]. For an arbitrary (reduced) root system, the corresponding commuting operators were introduced by Macdonald [19].

Ruijsenaars established a duality relation between the classical (trigonometric) CMS system and its (rational) relativistic version which essentially swaps the action and angle variables of the two systems [24]. He also conjectured a duality relation in the quantum case [25]. In the quantum case, there exists a function \(\psi \) of two sets of variables x and z such that

$$\begin{aligned} \begin{aligned} L\psi&= \lambda \psi , \\ M\psi&= \mu \psi , \end{aligned} \end{aligned}$$
(1.1)

where \(L = L(x, \partial _x)\) is the CMS Hamiltonian or its quantum integral, M is Ruijsenaars’ relativistic CMS operator acting in the variables z, and \(\lambda = \lambda (z)\), \(\mu = \mu (x)\) are the corresponding eigenvalues. The relations (1.1) may be viewed as a higher-dimensional differential-difference version of the one-dimensional differential-differential bispectrality relation studied by Duistermaat and Grünbaum [11].

The bispectrality relation (1.1) for the CMS Hamiltonian L related to any root system and for the corresponding Macdonald–Ruijsenaars operators M was established by Chalykh in [4]. In the case of integer coupling parameters, the function \(\psi \) can be taken to be a multi-dimensional Baker–Akhiezer (BA) function [6, 8, 10]. It has the form

$$\begin{aligned} \psi (z, x) = P e^{\langle x, z \rangle }, \end{aligned}$$
(1.2)

where P is a polynomial in z which also depends on x, and \(\psi \) can be characterised by its properties as a function of the variables z. The case of general coupling parameters is deduced by some analytic continuation in parameters and related arguments [4]. The relation (1.1) for the root system \(A_n\) and another function \(\psi \) given by integral formulas was established recently by Kharchev and Khoroshkin in [17]. We refer to [16] and references therein for aspects of this and various other dualities in the broad realm of integrable systems.

In general, the eigenfunctions of quantum integrable systems may be complicated. However, the multi-dimensional BA functions turn out to be manageable algebraic functions. An axiomatic definition of BA functions was formulated by Chalykh, Veselov and Styrkas for an arbitrary finite collection of non-collinear vectors with integer multiplicities in [6, 8] (see also [13] for a weaker version of the function; we also refer to [18] for one-dimensional BA functions). The key properties are the quasi-invariance conditions of the form

$$\begin{aligned} \psi (z+ s \alpha , x) = \psi (z - s\alpha , x) \end{aligned}$$

which are satisfied at \(\langle \alpha , z \rangle =0\) for any vector \(\alpha \) from the configuration, and s takes special values.

It was shown that the BA function exists only for very special configurations, which include positive subsystems of reduced root systems [8]. With a slight modification, one can also formulate an axiomatic definition of the BA function in the case of the (only) non-reduced root system \(BC_n\) [4].

The BA function is known to exist for certain generalisations of CMS systems related to special configurations of vectors which are not root systems. The known examples are deformations \(A_{n,1}(m)\), \(C_{n}(m,l)\) of the root systems \(A_n\) and \(C_n\), respectively, found by Chalykh, Veselov and one of the authors in [9, 10]; and another deformation \(A_{n,2}(m)\) of the root system \(A_{n+1}\) [7], which satisfies a weaker axiomatics [13]. The BA function for the configuration \(A_{n,1}(m)\) was constructed in [4], and for the cases \(C_n(m,l)\), \(A_{n,2}(m)\) it was constructed in [13]. It was also shown in the papers [4, 13] that the bispectrality relation (1.1) holds for the corresponding generalisations of Macdonald–Ruijsenaars operators. An important property of these operators is the preservation of a space of quasi-invariant analytic functions.

In the process of classification of algebraically integrable two-dimensional generalised CMS operators, the configuration of vectors \(AG_2\) emerged in the work of Fairley and one of the authors [12]. We studied the corresponding generalised CMS operator in [15]. There we found its eigenfunction \(\psi \) of the form (1.2) as \(\psi = {{\mathcal {D}}} \psi _0\), where \(\psi _0\) is the BA function for the root system \(G_2\) and \({{\mathcal {D}}}\) is an explicit differential operator of order three.

In this paper, we study the function \(\psi \) further. We describe it as a BA function by finding the conditions in the variables z which this function satisfies. We find a pair of explicit difference operators which have \(\psi \) as their common eigenfunction. Thus we establish bispectral duality for the generalised CMS system of type \(AG_2\) with integer coupling parameters. We also obtain an expression for the function \(\psi \) by iterations of the action of the difference operators, in analogy with formulas from the works [4, 13] for other configurations.

Let us now introduce more precisely the generalised CMS operators and the configuration of vectors \(AG_2\). Let \(\langle \cdot , \cdot \rangle \) denote the standard Euclidean inner product on \({\mathbb {R}}^n\). We consider the complexification of the real Euclidean space \({\mathbb {R}}^n\) and extend \(\langle \cdot , \cdot \rangle \) bilinearly. Let \(x = (x_1, \dots , x_n) \in {\mathbb {C}}^n\).

Let \(A \subset {\mathbb {C}}^n\) be a finite collection of non-isotropic vectors. A multiplicity map is a function \(m:A \rightarrow {\mathbb {C}}\), \(\alpha \mapsto m_\alpha = m(\alpha )\). One calls \(m_\alpha \) the multiplicity of \(\alpha \). One denotes such collections of vectors with multiplicities by \({\mathcal {A}}= (A, m)\), or just A if the intended multiplicity map is clear from the context. The corresponding generalised CMS operator has the form

$$\begin{aligned} L = -\Delta + \sum _{\alpha \in A} g_\alpha \sinh ^{-2} \langle \alpha , x \rangle , \end{aligned}$$
(1.3)

where \(\Delta = \sum _{i=1}^n \partial _{x_i}^2\) is the Laplacian on \({\mathbb {C}}^n\) and we adopt the convention of writing the constants \(g_\alpha \) as

$$\begin{aligned} g_\alpha = m_\alpha (m_\alpha + 2m_{2\alpha }+ 1) \langle \alpha , \alpha \rangle \end{aligned}$$
(1.4)

with \(m_{2\alpha } = 0\) if \(2\alpha \notin A\). The convention (1.4) is used in the theory of symmetric spaces (see e.g. [27]).
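For illustration only, the convention (1.4) can be encoded directly; the helper below is a sketch with a hypothetical name, not notation used in the paper:

```python
def coupling(m_alpha, m_2alpha, alpha_sq):
    """Coupling constant g_alpha = m_alpha (m_alpha + 2 m_{2alpha} + 1) <alpha, alpha>,
    with m_{2alpha} = 0 when 2*alpha is not in the configuration (convention (1.4))."""
    return m_alpha * (m_alpha + 2 * m_2alpha + 1) * alpha_sq

# For a configuration with no doubled vectors this reduces to the familiar
# m(m+1)|alpha|^2 coupling of the CMS potential:
assert coupling(2, 0, 1) == 2 * 3 * 1
```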

The Olshanetsky–Perelomov operators correspond to the case when A is a positive half of a root system and the multiplicity function m is Weyl-invariant. In that case, if \(m_\alpha \in {\mathbb {N}}\) for all \(\alpha \) then the operator (1.3) admits additional integrals and is algebraically integrable (see [6, 8]).

The configuration of vectors \(AG_2\) is a non-reduced configuration in \({\mathbb {C}}^2\) obtained as a union of the root systems \(G_2\) and \(A_2\). A positive half \(AG_{2,+}\) is shown in Fig. 1, where \({G_{2, +}} = \{\alpha _i, \beta _i:i=1, 2, 3\}\), and \({A_{2, +}}=\{2\beta _i:i=1, 2, 3\}\). There are in total 18 vectors in the configuration \(AG_2\). The subscripts on the \(\alpha _i\)’s are assigned in such a way that \( \langle \alpha _i, \beta _i \rangle = 0\) for all \(i = 1, 2, 3\). The ratio of the squared lengths of the long roots \(\alpha _i\) relative to the short roots \(\beta _i\) from the root system \(G_2\) is \(\alpha _i^2/\beta _i^2 = 3\) for all \(i=1, 2, 3\). The vectors \(\alpha _i \), \(\beta _i\) and \(2 \beta _i\) are assigned respectively the multiplicities m, 3m and 1, where \(m \in {\mathbb {C}}\) is a parameter.

We adopt a coordinate system where the vectors take the form

$$\begin{aligned} \alpha _1 = \omega \big (0, \sqrt{3} \big ), \quad \alpha _2 = \omega \left( {-}\tfrac{3}{2}, \tfrac{\sqrt{3}}{2} \right) , \quad \alpha _3 = \omega \left( \tfrac{3}{2}, \tfrac{\sqrt{3}}{2} \right) \end{aligned}$$

and

$$\begin{aligned} \beta _1 = \omega \big (1, 0 \big ),\quad \beta _2 = \omega \left( \tfrac{{1}}{2}, \tfrac{\sqrt{3}}{2} \right) , \quad \beta _3 = \omega \left( {-} \tfrac{{1}}{2}, \tfrac{\sqrt{3}}{2} \right) \end{aligned}$$

for some scaling \(\omega \in {\mathbb {C}}^\times \).

Fig. 1 A positive half of the configuration \(AG_2\) [15]

We have \(\beta _1 + \beta _3 = \beta _2\), \(\alpha _2 + \alpha _3 = \alpha _1\), and also

$$\begin{aligned} \begin{aligned}&\beta _1 = 2 \beta _2 - \alpha _1 = \alpha _1 - 2 \beta _3 = \alpha _3 - \beta _2 = \beta _3 - \alpha _2, \\&\alpha _1 = \tfrac{3}{2} \beta _2 + \tfrac{1}{2}\alpha _2 = \tfrac{3}{2}\beta _3 + \tfrac{1}{2}\alpha _3, \text { and } \beta _1 = \tfrac{1}{2} \beta _2 - \tfrac{1}{2} \alpha _2 = -\tfrac{1}{2} \beta _3 + \tfrac{1}{2} \alpha _3. \end{aligned} \end{aligned}$$

The configuration \(AG_2\) is contained in the lattice \({\mathbb {Z}}\beta _1 \oplus {\mathbb {Z}}\alpha _2 \subset {\mathbb {C}}^2\). It is invariant under the Weyl group of type \(G_2\), but it is not a crystallographic root system because, for instance, the vectors \(\beta _1\) and \(2\beta _2\) have \( 2 \langle \beta _1, 2\beta _2 \rangle /\langle 2\beta _2, 2\beta _2 \rangle = \frac{1}{2} \notin {\mathbb {Z}}\).
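The identities above, together with the orthogonality \(\langle \alpha _i, \beta _i \rangle = 0\), the length ratio 3, and the failure of the crystallographic condition, can be verified numerically from the stated coordinates. The sketch below is a purely illustrative check assuming the scaling \(\omega = 1\):

```python
import math

s3 = math.sqrt(3.0)
# Coordinates from the text, with omega = 1 (an assumption for this check)
a1, a2, a3 = (0.0, s3), (-1.5, s3/2), (1.5, s3/2)
b1, b2, b3 = (1.0, 0.0), (0.5, s3/2), (-0.5, s3/2)

def add(u, v): return (u[0] + v[0], u[1] + v[1])
def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def scal(c, u): return (c * u[0], c * u[1])
def dot(u, v): return u[0] * v[0] + u[1] * v[1]
def eq(u, v):
    return math.isclose(u[0], v[0], abs_tol=1e-12) and math.isclose(u[1], v[1], abs_tol=1e-12)

# Linear relations among the vectors
assert eq(add(b1, b3), b2) and eq(add(a2, a3), a1)
assert eq(b1, sub(scal(2, b2), a1)) and eq(b1, sub(a1, scal(2, b3)))
assert eq(b1, sub(a3, b2)) and eq(b1, sub(b3, a2))
# Orthogonality <alpha_i, beta_i> = 0 and squared length ratio 3
for a, b in ((a1, b1), (a2, b2), (a3, b3)):
    assert math.isclose(dot(a, b), 0.0, abs_tol=1e-12)
    assert math.isclose(dot(a, a) / dot(b, b), 3.0)
# Failure of the crystallographic condition for beta_1 against 2*beta_2
tb2 = scal(2, b2)
assert math.isclose(2 * dot(b1, tb2) / dot(tb2, tb2), 0.5)
```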

The corresponding generalised CMS quantum Hamiltonian (1.3) with \(A=AG_{2,+}\) has recently been shown to be quantum integrable for any value of the parameter \(m \in {\mathbb {C}}\) [15]. It is moreover algebraically integrable for \(m \in {\mathbb {N}}\) by virtue of \(AG_2\) being a locus configuration [12], as a consequence of the general results presented in [5] (see also [15]). In this paper, we continue investigating the generalised CMS operator for \(AG_2\) in the special case when \(m \in {\mathbb {N}}\).

The structure of the paper is as follows. In Sect. 2, we recall the notion of a multi-dimensional BA function associated with a configuration \({{\mathcal {A}}}\) following [6, 8], and we generalise it to a case when A has proportional vectors. We show that if such a function exists then it is an eigenfunction for the generalised CMS operator (1.3). In Sect. 3, we give a general ansatz for the dual generalised Macdonald–Ruijsenaars difference operators with rational coefficients, and we find sufficient conditions for these operators to preserve a space of quasi-invariant analytic functions. In Sect. 4, we find a difference operator \({{{\mathcal {D}}}}_1\) related to the configuration \(AG_2\) which satisfies the conditions from Sect. 3. In Sect. 5, we use this operator to show that the BA function for the configuration \(AG_2\) exists, and we express this function by iterated action of the operator. We also show that the BA function is an eigenfunction of the operator \({{{\mathcal {D}}}}_1\), thus establishing bispectral duality. In Sect. 6, we present another dual operator \({{{\mathcal {D}}}}_2\) for the configuration \(AG_2\), and we establish the corresponding statements for this operator analogous to the ones from Sects. 4 and 5.

In Sect. 7, we consider the operator \({\mathcal D}_1\) from Sect. 4 at \(m=0\), which gives a Macdonald–Ruijsenaars operator for the root system \(A_2\) with multiplicity 1. We also consider a version of this operator for the root system \(A_1\) and decompose it into a sum of two non-symmetric commuting difference operators. We relate these operators with the standard Macdonald–Ruijsenaars operator for the minuscule weight of the root system \(A_1\).

2 Baker–Akhiezer Functions

We propose a modified definition of multi-dimensional BA functions so as to extend the definitions from the papers [4, 6, 8] to include the configuration \(AG_2\). We formulate the definition in such a way that it naturally generalises both the case of reduced root systems and the case of the root system \(BC_n\).

Let \(R \subset {\mathbb {C}}^n\) be a (possibly non-reduced) finite collection of non-isotropic vectors. We assume there is a subset \(R_+\subset R\) such that any collinear vectors in \(R_+\) are of the form \(\alpha \), \(2\alpha \) and \(R=R_+\sqcup (-R_+)\). Let \(R^r = \{\alpha \in R:\frac{1}{2}\alpha \notin R \}\) and \(R^r_+ = R^r \cap R_+\). Let \(m:R\rightarrow {\mathbb {Z}}_{\ge 0}\) be a multiplicity map with \(m(R^r) \subset {\mathbb {N}}\). We extend it to a map \(m:R \cup 2R^r \rightarrow {\mathbb {Z}}_{\ge 0}\) by putting \(m_{2\alpha } = 0\) if \(2\alpha \notin R\), \(\alpha \in R^r\).

Definition 2.1

A function \(\psi (z, x)\) \((z,x \in {\mathbb {C}}^n)\) is a BA function for (Rm) if

  1. \(\psi (z,x) = P(z, x) \exp \langle z, x \rangle \) for some polynomial P in z with highest order term \(\prod _{\gamma \in R_+} \langle \gamma , z \rangle ^{m_\gamma }\);

  2. \(\psi (z + s\alpha , x) = \psi (z-s\alpha , x)\) at \(\langle z, \alpha \rangle = 0\) for \( s = 1, 2, 3, \dots , m_\alpha , m_\alpha +2, \dots , m_\alpha + 2m_{2\alpha }\) for all \(\alpha \in R^r_+\).
The multiplicities of the vectors in \(AG_2\) are \(m_{\beta _i} = 3m\), \(m_{2\beta _i} = 1\), \(m_{\alpha _i} = m\), and we put \(m_{2\alpha _i} = 0\) for all \(i=1, 2, 3\), where \(m \in {\mathbb {N}}\). When we apply Definition 2.1 to this configuration \(R = AG_2\), we get the notion of a BA function for \(AG_2\).
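In concrete terms, condition 2 of Definition 2.1 prescribes a finite set of shifts s for each vector. The following sketch (illustrative only; the helper name is ours) enumerates these shifts for the \(AG_2\) multiplicities:

```python
def quasi_invariance_shifts(m_alpha, m_2alpha):
    """The s-values in condition 2 of Definition 2.1:
    s = 1, ..., m_alpha, m_alpha + 2, ..., m_alpha + 2*m_2alpha."""
    return list(range(1, m_alpha + 1)) + [m_alpha + 2 * t for t in range(1, m_2alpha + 1)]

m = 2  # a sample integer coupling parameter
# beta_i carries multiplicity 3m with 2*beta_i present (multiplicity 1):
assert quasi_invariance_shifts(3 * m, 1) == [1, 2, 3, 4, 5, 6, 8]
# alpha_i carries multiplicity m with no doubled vector:
assert quasi_invariance_shifts(m, 0) == [1, 2]
```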

For \(\gamma \in {\mathbb {C}}^n\), we denote by \(\delta _\gamma \) the operator that acts on functions f(z, x) by

$$\begin{aligned} \delta _\gamma f(z,x) = f(z+\gamma ,x) - f(z-\gamma ,x). \end{aligned}$$

Condition 2 in Definition 2.1 admits the following equivalent characterisation.

Lemma 2.2

Let \(\alpha \in {\mathbb {C}}^n\) be non-isotropic, \(m_\alpha \in {\mathbb {N}}\), and \(m_{2\alpha } \in {\mathbb {Z}}_{\ge 0}\). A function \(\psi (z, x)\) \((z, x \in {\mathbb {C}}^n)\) analytic in z satisfies \(\psi (z + s \alpha ,x) = \psi (z - s\alpha ,x) \text { at } \langle z, \alpha \rangle = 0 \text { for } s = 1, 2, 3, \dots , m_{\alpha }, m_{\alpha }+2, \dots , m_{\alpha } + 2m_{2\alpha }\) if and only if

$$\begin{aligned} \bigg ( \delta _{\alpha } \circ \frac{1}{\langle z, \alpha \rangle } \bigg )^{s-1} \delta _{\alpha } \psi (z, x) = 0 \text { at } \langle z, \alpha \rangle = 0, s = 1, \dots , m_{\alpha } \end{aligned}$$
(2.1)

and

$$\begin{aligned} \bigg ( \delta _{2\alpha } \circ \frac{1}{\langle z, \alpha \rangle } \bigg )^{t}\circ \bigg ( \delta _{\alpha } \circ \frac{1}{\langle z, \alpha \rangle } \bigg )^{m_\alpha -1} \delta _{\alpha } \psi (z, x) = 0 \text { at } \langle z, \alpha \rangle = 0, t = 1, \dots , m_{2\alpha }. \end{aligned}$$
(2.2)

The proof will follow from the one-dimensional considerations in Lemma 2.4 below (see also [8], where the corresponding statements were given in the case \(m_{2\alpha }=0\)). Let \(\delta _r\) (\(r \in {\mathbb {N}}\)) be the difference operator that acts on functions F(k) (\(k \in {\mathbb {C}}\)) by \(\delta _r F(k) = F(k+r) - F(k - r)\). Write \(\delta = \delta _1\) for short. The next Lemma will be used in the proof of Lemma 2.4.

Lemma 2.3

Suppose \(\delta _r F = k {{{\widehat{F}}}}\) for analytic functions F(k) and \({{{\widehat{F}}}}(k)\). Suppose also that \(F(s-r)=F(r-s)\) for some s. Then \({{{\widehat{F}}}}(s) = {{{\widehat{F}}}}(-s)\) if and only if \(F (s+r) = F (-s-r)\).

Proof

The statement follows by taking \(k=s\) and \(k=-s\) in the equality \(\delta _r F = k {{{\widehat{F}}}}\). \(\square \)
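Lemma 2.3 can be illustrated on a concrete function. In the sketch below (illustrative only) we take \(F(k) = k^3 - k\) and \(r = 1\), so that \(\delta _1 F = k {{\widehat{F}}}\) with \({{\widehat{F}}}(k) = 6k\) and \(F(1) = F(-1) = 0\); for \(s = 2\) both sides of the equivalence fail simultaneously, as the Lemma predicts:

```python
def delta(F, r=1):
    """Difference operator: (delta_r F)(k) = F(k + r) - F(k - r)."""
    return lambda k: F(k + r) - F(k - r)

# F(k) = k^3 - k gives (delta F)(k) = 6 k^2 = k * Fhat(k) with Fhat(k) = 6k,
# and the hypothesis F(s - r) = F(r - s) holds for s = 2 since F(1) = F(-1) = 0.
F = lambda k: k**3 - k
Fhat = lambda k: 6 * k
assert all(delta(F)(k) == k * Fhat(k) for k in range(-5, 6))
# Lemma 2.3 with s = 2: Fhat(2) = Fhat(-2) iff F(3) = F(-3); here both fail.
assert (Fhat(2) == Fhat(-2)) == (F(3) == F(-3)) == False
```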

Lemma 2.4

The following two properties are equivalent for any analytic function F(k) \((k \in {\mathbb {C}})\) and any \(n\in {\mathbb {N}}\), \(m \in {\mathbb {Z}}_{\ge 0}\).

  1. For all \(s = 1, \dots , n,\)

    $$\begin{aligned} \bigg ( \delta \circ \frac{1}{k} \bigg )^{s-1} \delta F(k) \bigg |_{k=0} = 0, \end{aligned}$$
    (2.3)

    and for all \(t = 1, \dots , m,\)

    $$\begin{aligned} \bigg ( \delta _2 \circ \frac{1}{k} \bigg )^{t} \circ \bigg ( \delta \circ \frac{1}{k} \bigg )^{n-1} \delta F(k) \bigg |_{k=0} = 0. \end{aligned}$$
    (2.4)

  2. \(F(s) = F(-s)\) for all \(s = 1, 2, 3, \dots , n, n+2, \dots , n+2m\).

Proof

Let us first prove by induction on n that conditions (2.3) are equivalent to the existence of analytic functions \(G_1, \ldots , G_n\) such that \(\delta G_q (k) = k G_{q+1}(k)\) for \(0\le q \le n-1\), \(G_0 = F\), with the conditions \(G_q(s)=G_q(-s)\) for \(1\le s \le n-q\).

For the base case \(n=1\), the condition (2.3) is equivalent to \(F(1)=F(-1)\), which is equivalent to the existence of an analytic function \(G_1\) such that \(\delta F = k G_1\). Thus the base of induction holds.

Let us now assume that the induction hypothesis is satisfied for some \(n\in {\mathbb {N}}\). The relation (2.3) for \(s=n+1\) is equivalent to \(\delta G_n (k) =0\) at \(k=0\), that is \(G_n(1) = G_n(-1)\), which is also equivalent to the existence of an analytic function \(G_{n+1}\) such that \(\delta G_n = k G_{n+1}\).

We also have \(k G_n = \delta G_{n-1}\) by the induction hypothesis. By applying Lemma 2.3 with \(s=1\) we get \( G_{n-1}(2) = G_{n-1}(-2)\). By considering \(k G_{n-1} = \delta G_{n-2}\) and using that by induction hypothesis \(G_{n-2}(1) = G_{n-2}(-1)\), we obtain by Lemma 2.3 with \(s=2\) that \(G_{n-2}(3) = G_{n-2}(-3)\). Similarly we obtain \(G_q(n-q+1)=G_q(-n+q-1)\) successively for \(q=n-3,\ldots , 0\), which completes the proof of the claim by induction.

For any fixed n let us now prove by induction on m that relations (2.3), (2.4) are equivalent to the existence of functions \(G_1, \ldots , G_n\) as in the first paragraph of the proof with the additional conditions \(G_q(s) = G_q(-s)\) for \(s=n-q + 2t\) with \(1\le t \le m\), \(0\le q \le n-1\), together with the existence of analytic functions \(H_1, \ldots , H_m\) satisfying \(\delta _2 H_q = k H_{q+1}\) for \(0 \le q \le m-1\), \(H_0=G_n\), and \(H_q(2s)=H_q(-2s)\) for \(1\le s \le m-q\).

The base case \(m = 0\) is already done. Let us now assume that the induction hypothesis is satisfied for some \(m \in {\mathbb {Z}}_{\ge 0}\). The relation (2.4) for \(t = m+1\) is equivalent to \(\delta _2 H_m(k) =0\) at \(k=0\), that is \(H_m(2) = H_m(-2)\), which is also equivalent to the existence of an analytic function \(H_{m+1}\) such that \(\delta _2 H_m = k H_{m+1}\).

We also have \(k H_m = \delta _2 H_{m-1}\) by the induction hypothesis. By applying Lemma 2.3, we obtain \(H_q(2(m-q+1)) = H_q(-2(m-q+1))\) successively for \(q=m-1,\ldots , 0\).

By considering \(k H_{0} = k G_n =\delta G_{n-1}\) and using that by induction hypothesis \(G_{n-1}(2m+1) = G_{n-1}(-2m-1)\), we obtain by Lemma 2.3 for \(s= 2(m+1)\) that \(G_{n-1}(2m+3) = G_{n-1}(-2m-3)\). Similarly we obtain \(G_q(n-q + 2m + 2) = G_q(-n+q - 2m - 2)\) successively for \(q= n-2,\ldots , 0\), which completes the proof by induction on m.

It follows that condition 1 in Lemma 2.4 implies condition 2.

Let us now show that condition 2 implies condition 1. Firstly, let us prove it in the case \(m=0\) by induction on n. The base case \(n=1\) is clear. Let us assume it for some \(n \in {\mathbb {N}}\). Suppose now that \(F(s) = F(-s)\) for \(s = 1, \dots , n, n+1\). By the induction hypothesis and the above analysis in the first part of the proof, we know that there exist functions \(G_1, \dots , G_n\) with properties as stated in the first paragraph of the proof. By using \(F(n\pm 1) = F(-n\mp 1)\) and \(\delta F = kG_1\) and applying Lemma 2.3 with \(s = n\), we get \(G_1(n) = G_1(-n)\). Similarly we obtain \(G_q(n+1-q) = G_q(-n-1+q)\) successively for \(q = 2, \dots , n\). In particular, \(G_n(1) = G_n(-1)\), which implies that the relation (2.3) holds for \(s = n+1\), completing the induction on n.

Suppose that condition 2 implies condition 1 for some \(m \in {\mathbb {Z}}_{\ge 0}\). Now suppose F satisfies condition 2 and \(F(n+2m+2) = F(-n-2m-2)\). By the induction hypothesis and the above analysis in the first part of the proof, we have existence of functions \(G_1, \dots , G_n\) and \(H_1, \dots , H_m\) with properties as stated in the fifth paragraph of the proof. By using the assumptions on F, the fact that \(\delta F = k G_1\) and applying Lemma 2.3 for \(s= n+2m+1\), we get that \(G_{1}(n+2m+1) = G_{1}(-n-2m-1)\). Similarly we obtain \(G_q(n+2m+2-q) = G_q(-n-2m-2+q)\) successively for \(q = 2, \dots , n\), and \(H_q(2m+2-2q) = H_q(-2m-2+2q)\) successively for \(q = 0, \dots , m\). In particular, \(H_m(2) = H_m(-2)\), which implies that the relation (2.4) holds for \(t = m+1\), which completes the proof. \(\square \)
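As an illustration (not needed for the proofs), Lemma 2.4 can be verified in a small example by exact polynomial arithmetic. The sketch below, with hypothetical helper names, takes \(n=2\), \(m=1\), so condition 2 requires \(F(s)=F(-s)\) for \(s=1,2,4\); we take \(F(k) = k(k^2-1)(k^2-4)(k^2-16) + k^2\), whose odd part vanishes exactly at those points, and check that conditions (2.3) and (2.4) hold at \(k=0\):

```python
from fractions import Fraction
from math import comb

def shift(p, r):
    # coefficients of p(k + r), where p is given as [c0, c1, ...]
    out = [Fraction(0)] * len(p)
    for j, c in enumerate(p):
        for i in range(j + 1):
            out[i] += c * comb(j, i) * Fraction(r) ** (j - i)
    return out

def delta(p, r=1):
    # coefficients of p(k + r) - p(k - r)
    a, b = shift(p, r), shift(p, -r)
    return [x - y for x, y in zip(a, b)]

def div_k(p):
    assert p[0] == 0          # exact divisibility by k
    return p[1:]

def polymul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# F(k) = k (k^2-1)(k^2-4)(k^2-16) + k^2
F = [Fraction(0), Fraction(1)]
for s2 in (1, 4, 16):
    F = polymul(F, [Fraction(-s2), Fraction(0), Fraction(1)])
F[2] += 1                     # an even part, symmetric for free

G = delta(F)                  # condition (2.3), s = 1
assert G[0] == 0
G = delta(div_k(G))           # condition (2.3), s = 2
assert G[0] == 0
H = delta(div_k(G), r=2)      # condition (2.4), t = 1 (uses delta_2)
assert H[0] == 0
```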

The following Lemma is a generalisation to the present context of [13, Lemma 1] (see also [8, Proposition 1]). In its proof we use that for a polynomial p(z) we have \(p(z+\gamma ) = p(z) + \text {l.o.t.}\) by the binomial theorem, where “\(\text {l.o.t.}\)” denotes terms of lower order in z than \(\deg p(z)\). This Lemma will be used below to prove uniqueness of the BA function whenever such a function exists.

Lemma 2.5

Let \(\psi (z,x) = P(z, x) \exp \langle z, x \rangle \) \((z, x \in {\mathbb {C}}^n)\), where P(z, x) is a polynomial in z. Suppose that \(\psi \) satisfies conditions (2.1) and (2.2) for some non-zero \(\alpha \in {\mathbb {C}}^n\), \(m_\alpha \in {\mathbb {N}}\), \(m_{2\alpha } \in {\mathbb {Z}}_{\ge 0}\). Then \(\langle \alpha , z \rangle ^{m_\alpha + m_{2\alpha }}\) divides the highest order term \(P_0(z, x)\) of P(z, x).

Proof

Firstly, the condition (2.1) with \(s=1\) gives

$$\begin{aligned} 0 = \delta _{\alpha } \psi (z, x)&= \big (P_0(z, x)(\exp {\langle \alpha , x \rangle } - \exp {\langle -\alpha , x \rangle }) + \text {l.o.t.}\big )\exp {\langle z, x \rangle } \\&= \big (2 \sinh \langle \alpha , x \rangle P_0(z, x) + \text {l.o.t.}\big )\exp {\langle z, x \rangle } \end{aligned}$$

at \(\langle z, \alpha \rangle = 0\) for all \(x \in {\mathbb {C}}^n\). This implies that \(P_0(z, x)\) and the \(\text {l.o.t.}\) must be divisible by \(\langle z, \alpha \rangle \). Let \(P_0(z, x) = \langle z, \alpha \rangle P_0^{(1)}(z, x)\), and let

$$\begin{aligned} \psi ^{(1)}(z, x) = \frac{\delta _{\alpha }\psi (z, x)}{\langle z, \alpha \rangle } = \big (2 \sinh \langle \alpha , x \rangle P_0^{(1)}(z, x) + \text {l.o.t.}\big )\exp {\langle z, x \rangle }. \end{aligned}$$

The condition (2.1) with \(s=2\) gives

$$\begin{aligned} 0 = \delta _{\alpha } \psi ^{(1)}(z, x) = \big (4 \sinh ^2\langle \alpha , x \rangle P_0^{(1)}(z, x) + \text {l.o.t.}\big )\exp {\langle z, x \rangle } \end{aligned}$$

at \(\langle z, \alpha \rangle = 0\) for all \(x \in {\mathbb {C}}^n\). This implies that \(\langle z, \alpha \rangle \) divides \(P_0^{(1)}(z, x)\). In particular, \(P_0(z, x)\) is divisible by \(\langle z, \alpha \rangle ^2\): \( P_0(z, x)\langle z, \alpha \rangle ^{-2} = P_0^{(2)}(z, x) \) for some polynomial \(P_0^{(2)}\). We let \( \psi ^{(2)}(z, x) = {\langle z, \alpha \rangle }^{-1} \delta _{\alpha }\psi ^{(1)}(z, x). \) By iterating for \(s=3, \dots , m_\alpha \), we get that \(P_0(z, x)\) is divisible by \(\langle z, \alpha \rangle ^{m_{\alpha }}\), and we recursively obtain polynomials \( P_0^{(s)}(z, x) = P_0^{(s-1)}(z, x)\langle z, \alpha \rangle ^{-1} = P_0(z, x)\langle z, \alpha \rangle ^{-s} \) and functions

$$\begin{aligned} \psi ^{(s)}(z, x) = \frac{\delta _{\alpha }\psi ^{(s-1)}(z, x)}{\langle z, \alpha \rangle } = \big (2^s \sinh ^s\langle \alpha , x \rangle P_0^{(s)}(z, x) + \text {l.o.t.}\big )\exp {\langle z, x \rangle }. \end{aligned}$$

The condition (2.2) with \(t=1\) then gives

$$\begin{aligned} 0 = \delta _{2\alpha } \psi ^{(m_\alpha )}(z, x) = \big (2^{m_\alpha + 1} \sinh ^{m_\alpha }\langle \alpha , x \rangle \sinh \langle 2\alpha , x \rangle P_0^{(m_\alpha )}(z, x) + \text {l.o.t.}\big )\exp {\langle z, x \rangle } \end{aligned}$$

at \(\langle z, \alpha \rangle = 0\). This implies that \(\langle z, \alpha \rangle \) divides \(P_0^{(m_\alpha )}(z, x)\) and the l.o.t.; in particular, \(P_0(z, x)\) is divisible by \(\langle z, \alpha \rangle ^{m_\alpha +1}\). Continuing this iteratively for \(t=2, \dots , m_{2\alpha }\) completes the proof. \(\square \)

Lemma 2.5 has the following consequence.

Lemma 2.6

Let \(\psi (z,x) = P(z, x) \exp \langle z, x \rangle \) \((z, x \in {\mathbb {C}}^n)\) satisfy condition 2 in Definition 2.1, where P(z, x) is a polynomial in z with highest order term \(P_0(z, x)\). Then \(\prod _{\gamma \in R_+} \langle \gamma , z \rangle ^{m_\gamma }\) divides \(P_0(z, x)\).

Proof

Lemma 2.5 gives that \(P_0(z, x)\) is divisible by \(\langle z, \alpha \rangle ^{m_\alpha + m_{2\alpha }}\) for all \(\alpha \in R^r_+\). This is a constant multiple of \(\langle z, \alpha \rangle ^{m_{\alpha }} \langle z, 2\alpha \rangle ^{m_{2\alpha }}\). The statement follows since we are assuming that collinear vectors in \(R_+\) are only of the form \(\alpha \), \(2\alpha \). \(\square \)

Lemma 2.6 leads to the following uniqueness statement analogous to [8, Proposition 1] (cf. also [13, Proposition 1]) with a completely analogous proof.

Proposition 2.7

If a BA function satisfying Definition 2.1 exists, then it is unique. In particular, if a BA function for \(AG_2\) exists then it is unique.

The next Theorem generalises [8, Theorem 1] to the present context, and it is proved analogously. It states that if the BA function satisfying Definition 2.1 exists, then it is a joint eigenfunction of a commutative ring of differential operators in the variables x. Let us first define an isomorphic ring of polynomials.

Let \({{\mathcal {R}}}\) be the ring of polynomials \(p(z)\in {\mathbb {C}}[z_1, \ldots , z_n]\) satisfying

$$\begin{aligned} p(z + s\alpha ) = p(z-s\alpha ) \text { at } \langle z, \alpha \rangle = 0, s = 1, 2, 3, \dots , m_\alpha , m_\alpha + 2, \dots , m_\alpha + 2m_{2\alpha } \end{aligned}$$

for all \(\alpha \in R^r_+\) (we remark that this is similar to condition 2 in Definition 2.1). We have \(z^2 \in {{\mathcal {R}}}\). Indeed, for any \(\gamma \in {\mathbb {C}}^n\), \(s \in {\mathbb {N}}\), we have \((z\pm s\gamma )^2 = z^2 \pm 2s \langle z, \gamma \rangle + s^2\gamma ^2 = z^2 + s^2\gamma ^2\) at \(\langle z, \gamma \rangle = 0\).
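The computation showing \(z^2 \in {{\mathcal {R}}}\) can be checked numerically for a sample non-isotropic \(\gamma \) and a z with \(\langle z, \gamma \rangle = 0\); the following sketch is illustrative only:

```python
import math, random

random.seed(0)
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def add(u, v, c=1): return [a + c * b for a, b in zip(u, v)]

gamma = [random.uniform(-1, 1) for _ in range(3)]
# build a z with <z, gamma> = 0 by projecting a random vector off gamma
w = [random.uniform(-1, 1) for _ in range(3)]
c = dot(w, gamma) / dot(gamma, gamma)
z = add(w, gamma, -c)

p = lambda u: dot(u, u)     # p(z) = z^2
# (z + s*gamma)^2 = z^2 + s^2 gamma^2 = (z - s*gamma)^2 at <z, gamma> = 0
for s in (1, 2, 3):
    assert math.isclose(p(add(z, gamma, s)), p(add(z, gamma, -s)))
```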

For a polynomial \(p(z) = p(z_1, \dots , z_n)\), by \(p(\partial _x)\) we will mean \(p(\partial _{x_1}, \dots , \partial _{x_n})\). For example, if \(p(z) = z^2 = z_1^2 + \dots + z_n^2\), then \(p(\partial _x) = \Delta \) is the Laplace operator in n dimensions acting in the variables x.

The following statement holds.

Theorem 2.8

If the BA function \(\psi (z, x)\) satisfying Definition 2.1 exists, then for any \(p(z) \in {{\mathcal {R}}}\) there is a differential operator \(L_p( x, \partial _x)\) with highest order term \(p(\partial _x)\) such that

$$\begin{aligned} L_p( x, \partial _x) \psi (z, x) = p(z) \psi (z, x). \end{aligned}$$

For any \(p, q \in {{\mathcal {R}}}\), the operators \(L_p\) and \(L_q\) commute.

The next Lemma will be used below to prove that the differential operator \(L_{z^2}\) from Theorem 2.8 corresponding to the element \(z^2\) of \({{\mathcal {R}}}\) coincides with the generalised CMS Hamiltonian (1.3) associated to the configuration \(A = R_+\). It is a generalisation of [13, Lemma 2] (see also [8]).

Lemma 2.9

Suppose \(\psi (z, x) = P(z,x) \exp \langle z, x \rangle \) satisfies Definition 2.1. Let \(N = \sum _{\gamma \in R_+} m_\gamma \). Write \(P(z,x) = \sum _{i=0}^N P_i(z, x)\) where \(P_0(z, x) = \prod _{\gamma \in R_+} \langle \gamma , z \rangle ^{m_\gamma }\) and \(P_i\) are polynomials homogeneous in z with \(\deg P_i = N - i\). Then

$$\begin{aligned} \frac{P_1(z, x)}{P_0(z, x)} = - \sum _{\gamma \in R_+} \frac{m_\gamma (m_\gamma + 2m_{2\gamma } + 1)\gamma ^2}{2\langle \gamma , z \rangle } \coth \langle \gamma , x \rangle . \end{aligned}$$
(2.5)

Proof

We introduce the following notation

$$\begin{aligned} A(z) = P_0(z, x), \quad A_\alpha (z) = \frac{A(z)}{\langle \alpha , z \rangle }, \quad A_{\alpha \beta }(z) = \frac{A(z)}{\langle \alpha , z \rangle \langle \beta , z \rangle } = A_{\beta \alpha }(z), \quad \text {etc.} \end{aligned}$$

As a shorthand, we will write \(A_{\alpha ^2}\) for \(A_{\alpha \alpha }\), etc. In this notation, the equality (2.5) can be written as \(P_1 = -\sum _{\gamma \in R_+} \frac{1}{2}m_\gamma (m_\gamma + 2 m_{2\gamma } + 1)\gamma ^2 A_\gamma (z) \coth \langle \gamma , x \rangle \).

The strategy is as follows. Applying condition 2 in Definition 2.1 and equating the homogeneous terms of each degree on the right- and left-hand sides, one gets equations for \(P_1, \dots , P_N\). We are assuming that \(\psi \) satisfies the definition, hence a solution exists. We are only interested in \(P_1\) here.

Fix any \(\alpha \in R^r_+\). We will omit arguments of functions whenever convenient, and write \(e^\alpha \) for \(\exp \langle \alpha , x \rangle \). By the binomial theorem \(A(z \pm \alpha )\) has the following homogeneous components

$$\begin{aligned} A(z \pm \alpha ) = A(z) \pm \sum _{\beta \in R_+} m_\beta \langle \alpha , \beta \rangle A_\beta (z) + \text {l.o.t.}\end{aligned}$$

and similarly \(P_1(z \pm \alpha , x) = P_1(z, x) + \text {l.o.t.}\)

We will use the fact that for this fixed \(\alpha \) condition 2 in Definition 2.1 admits the equivalent characterisation given in Lemma 2.2. We will apply condition (2.1) for \(s = 1, \dots , m_\alpha \), and then if \(m_{2\alpha } > 0\) also condition (2.2) for \(t = 1, \dots , m_{2\alpha }\), in order to see what conditions this places on \(P_1\). Applying condition (2.1) with \(s = 1\) gives \(0 = \delta _\alpha \psi (z, x) = (P(z + \alpha , x)e^\alpha - P(z - \alpha , x)e^{-\alpha }) \exp \langle z, x \rangle \) at \(\langle \alpha , z \rangle = 0\). This equation can be rearranged (after dividing through by \(\exp \langle z, x \rangle \)) as

$$\begin{aligned} 0 = A(z)(e^\alpha - e^{-\alpha }) + \sum _{\beta \in R_+} m_\beta \langle \alpha , \beta \rangle A_\beta (z)(e^\alpha + e^{-\alpha }) + P_1(z,x)(e^\alpha - e^{-\alpha }) + \text {l.o.t.} \end{aligned}$$
(2.6)

Note that for \(\beta \ne \alpha \) and \(\beta \ne 2\alpha \) we have \(A_\beta (z) = 0\) at \(\langle \alpha , z \rangle = 0\). By identifying the homogeneous terms of degree \(N-1\) in (2.6), we get at \(\langle \alpha , z \rangle = 0\) that

$$\begin{aligned} 0&= \left( m_\alpha \alpha ^2 A_\alpha + 2m_{2\alpha } \alpha ^2 A_{2\alpha }\right) (e^\alpha + e^{-\alpha }) + P_1(e^\alpha - e^{-\alpha }) \nonumber \\&= (m_\alpha + m_{2\alpha }) \alpha ^2 A_\alpha (e^\alpha + e^{-\alpha }) + P_1(e^\alpha - e^{-\alpha }). \end{aligned}$$
(2.7)

Notice that this can be rearranged as \( P_1 = - (m_\alpha + m_{2\alpha })\alpha ^2 A_\alpha \coth \langle \alpha , x \rangle \) at \(\langle \alpha , z \rangle = 0\).

Assume that \(m_\alpha > 1\). Then \(A_\alpha (z) = 0 \) at \(\langle \alpha , z \rangle = 0\), which forces \(P_1 = 0\) at \(\langle \alpha , z \rangle = 0\). This is equivalent to divisibility of \(P_1\) by \(\langle \alpha , z \rangle \). We are next going to consider the condition (2.1) from Lemma 2.2 with \(s=2\), and we are now interested in the degree \(N-2\) terms in the polynomial part of \((\delta _\alpha \circ \langle \alpha , z \rangle ^{-1})\delta _\alpha \psi (z, x)\). These terms arise in two ways: firstly they come from the degree \(N-2\) terms in the polynomial part of \(\langle \alpha , z \rangle ^{-1} \delta _\alpha \psi \) and further action of \(\delta _\alpha \) on \(\exp \langle \alpha , z \rangle \); secondly they come from application of \(\delta _\alpha \) onto the terms in \(\langle \alpha , z \rangle ^{-1} \delta _\alpha \psi \) whose polynomial part has degree \(N-1\). Let us examine these two possibilities individually. From (2.6) we see that the terms of degree \(N-2\) in \(\exp \langle -z, x \rangle \langle \alpha , z \rangle ^{-1} \delta _\alpha \psi \) are

$$\begin{aligned}&\sum _{\beta \in R_+} m_\beta \langle \alpha , \beta \rangle A_{\alpha \beta }(z)(e^\alpha + e^{-\alpha }) + \frac{P_1(z,x)}{\langle \alpha , z \rangle }(e^\alpha - e^{-\alpha }) \nonumber \\&\quad = (m_\alpha + m_{2\alpha })\alpha ^2 A_{\alpha ^2}(z) (e^\alpha + e^{-\alpha }) + \sum _{\beta \in R_+ \setminus \{ \alpha , 2\alpha \}} m_\beta \langle \alpha , \beta \rangle A_{\alpha \beta }(z)(e^\alpha + e^{-\alpha })\nonumber \\&\qquad + \frac{P_1(z,x)}{\langle \alpha , z \rangle }(e^\alpha - e^{-\alpha }) . \end{aligned}$$
(2.8)

From (2.6) we also see that the term of degree \(N-1\) in \(\exp \langle -z, x \rangle \langle \alpha , z \rangle ^{-1}\delta _\alpha \psi \) is \(A_\alpha (z) (e^\alpha - e^{-\alpha })\). Since we need to know how \(\delta _\alpha \) acts on it, we note here that

$$\begin{aligned} A_\alpha (z \pm \alpha ) = A_\alpha (z) \pm (m_\alpha + m_{2\alpha } - 1)\alpha ^2 A_{\alpha ^2}(z) \pm \sum _{\beta \in R_+ \setminus \{\alpha , 2\alpha \}} m_\beta \langle \alpha , \beta \rangle A_{\alpha \beta }(z) + \text {l.o.t.}\end{aligned}$$
(2.9)

Let us now consider condition (2.1) from Lemma 2.2 with \(s=2\), namely \( 0 = (\delta _\alpha \circ \langle \alpha , z \rangle ^{-1})\delta _\alpha \psi (z, x)\) at \(\langle \alpha , z \rangle = 0\). By making use of formulas (2.8) and (2.9) this condition can be rearranged as

$$\begin{aligned} \begin{aligned} 0 =&\, A_\alpha (z) (e^\alpha - e^{-\alpha })^2 + (m_\alpha + m_{2\alpha } - 1)\alpha ^2 A_{\alpha ^2}(z) (e^\alpha - e^{-\alpha })(e^\alpha + e^{-\alpha }) \\ {}&+ (m_\alpha + m_{2\alpha })\alpha ^2 A_{\alpha ^2}(z) (e^\alpha + e^{-\alpha })(e^\alpha - e^{-\alpha }) \\ {}&+ \sum _{\beta \in R_+ \setminus \{\alpha , 2\alpha \}} 2m_\beta \langle \alpha , \beta \rangle A_{\alpha \beta }(z)(e^\alpha + e^{-\alpha })(e^\alpha - e^{-\alpha }) \\&+ \frac{P_1(z,x)}{\langle \alpha , z \rangle }(e^\alpha - e^{-\alpha })^2 + \text {l.o.t.}\end{aligned} \end{aligned}$$
(2.10)

Note that for \(\beta \ne \alpha \) and \(\beta \ne 2\alpha \) we have \(A_{\alpha \beta }(z) = 0\) at \(\langle \alpha , z \rangle = 0\). By identifying the homogeneous terms of degree \(N-2\) in (2.10) we get at \(\langle \alpha , z \rangle = 0\) that

$$\begin{aligned} 0 = \left( (m_\alpha + m_{2\alpha }) + (m_\alpha + m_{2\alpha } -1) \right) \alpha ^2 A_{\alpha ^2}(z)(e^\alpha - e^{-\alpha })(e^\alpha + e^{-\alpha }) + \frac{P_1}{\langle \alpha , z \rangle }(e^\alpha - e^{-\alpha })^2. \end{aligned}$$
(2.11)

If \(m_\alpha > 2\), then \(A_{\alpha ^2}(z) = 0\) at \(\langle \alpha , z \rangle = 0\), so (2.11) gives divisibility of \(\langle \alpha , z \rangle ^{-1}P_1\) by \(\langle \alpha , z \rangle \). Continuing the above process iteratively for \(s = 3, \dots , m_\alpha \), after the step \(s = m_\alpha \) we get that at \(\langle \alpha , z \rangle = 0\)

$$\begin{aligned} \begin{aligned} 0 =&\left( (m_\alpha + m_{2\alpha }) + (m_\alpha + m_{2\alpha } -1)+ \dots \right. \\&\left. +(1 + m_{2\alpha }) \right) \alpha ^2 A_{\alpha ^{m_\alpha }}(z)(e^\alpha - e^{-\alpha })^{m_\alpha - 1}(e^\alpha + e^{-\alpha }) \\&+ \frac{P_1}{\langle \alpha , z \rangle ^{m_\alpha - 1}}(e^\alpha - e^{-\alpha })^{m_\alpha }. \end{aligned} \end{aligned}$$
(2.12)

We have

$$\begin{aligned} (m_\alpha + m_{2\alpha }) + (m_\alpha + m_{2\alpha } -1)+ \dots +(1 + m_{2\alpha })&= m_\alpha m_{2\alpha } + (1 + \dots + m_\alpha )\\&= \frac{m_\alpha (m_\alpha +2m_{2\alpha } + 1)}{2}. \end{aligned}$$
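This counting identity is elementary; as a sanity check, it can be verified symbolically (a verification sketch only, with symbols standing for the multiplicities \(m_\alpha \) and \(m_{2\alpha }\)):

```python
import sympy as sp

# Symbols standing for the multiplicities m_alpha and m_{2alpha}
m_a, m_2a = sp.symbols('m_a m_2a', positive=True, integer=True)
k = sp.symbols('k', integer=True)

# Left-hand side: (m_a + m_2a) + (m_a + m_2a - 1) + ... + (1 + m_2a)
lhs = sp.summation(k + m_2a, (k, 1, m_a))

# The two claimed closed forms
middle = m_a * m_2a + sp.summation(k, (k, 1, m_a))
rhs = m_a * (m_a + 2 * m_2a + 1) / 2

assert sp.simplify(lhs - middle) == 0
assert sp.simplify(lhs - rhs) == 0
```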

Therefore (2.12) can be rearranged as

$$\begin{aligned} \frac{P_1}{\langle \alpha , z \rangle ^{m_\alpha - 1}} = -\frac{m_\alpha (m_\alpha +2m_{2\alpha } + 1)}{2} \alpha ^2 A_{\alpha ^{m_\alpha }}(z) \coth \langle \alpha , x \rangle \text { at } \langle \alpha , z \rangle = 0. \end{aligned}$$
(2.13)

If \(m_{2\alpha } = 0\) for all \(\alpha \in R_+\) for which \(\frac{\alpha }{2} \notin R_+\), then for each such \(\alpha \) there are no more conditions in Definition 2.1 to be satisfied, and \(P_1\) given by (2.5) satisfies (2.13). This completes the proof in this case, by uniqueness. Otherwise we continue and consider condition (2.2). Note that the highest order term that we are carrying forward is \(A_{\alpha ^{m_\alpha -1}}(z)\).

Assume \(m_{2\alpha } > 0\). Then \(A_{\alpha ^{m_\alpha }}(z) = 0\) at \(\langle \alpha , z \rangle = 0\), so (2.13) implies divisibility of \(\langle \alpha , z \rangle ^{1-m_\alpha }P_1\) by \(\langle \alpha , z \rangle \). Consider the condition (2.2) from Lemma 2.2 with \(t=1\). Note that

$$\begin{aligned} A_{\alpha ^{m_\alpha }}(z \pm 2\alpha ) = A_{\alpha ^{m_\alpha }}(z) \pm m_{2\alpha }(2\alpha )^2 A_{(2\alpha )\alpha ^{m_\alpha }} + f(z) \end{aligned}$$

where the term f(z) vanishes at \(\langle \alpha , z \rangle = 0\). Thus at this step we obtain

$$\begin{aligned} 0 =&\, m_{2\alpha }(2\alpha )^2 A_{(2\alpha )\alpha ^{m_\alpha }}(z) (e^{2\alpha } + e^{-2\alpha })(e^\alpha - e^{-\alpha })^{m_\alpha }\nonumber \\&+ \frac{m_\alpha (m_\alpha +2m_{2\alpha } + 1)\alpha ^2}{2} A_{\alpha ^{m_\alpha + 1}}(z)(e^{2\alpha } - e^{-2\alpha })(e^\alpha - e^{-\alpha })^{m_\alpha - 1}(e^\alpha + e^{-\alpha }) \nonumber \\&+ \frac{P_1}{\langle \alpha , z \rangle ^{m_\alpha }}(e^{2\alpha } - e^{-2\alpha })(e^\alpha - e^{-\alpha })^{m_\alpha } \end{aligned}$$
(2.14)

at \(\langle \alpha , z \rangle = 0\). This can be rearranged as

$$\begin{aligned} \frac{P_1}{\langle \alpha , z \rangle ^{m_\alpha }} =&-m_{2\alpha }(2\alpha )^2A_{(2\alpha )\alpha ^{m_\alpha }}(z) \coth \langle 2\alpha , x \rangle \nonumber \\&\quad - \frac{m_\alpha (m_\alpha +2m_{2\alpha } + 1)\alpha ^2}{2} A_{\alpha ^{m_\alpha + 1}}(z) \coth \langle \alpha , x \rangle . \end{aligned}$$
(2.15)

Assume \(m_{2\alpha } = 1\); then one can check that \(P_1\) given by (2.5) satisfies (2.15). Otherwise \(m_{2\alpha } > 1\) and we continue recursively. For \(t=2\), for example, to treat the highest order term we need to consider

$$\begin{aligned} A_{\alpha ^{m_\alpha + 1}}(z \pm 2\alpha ) = A_{\alpha ^{m_\alpha + 1}}(z) \pm (m_{2\alpha }-1)(2\alpha )^2 A_{(2\alpha )\alpha ^{m_\alpha +1}} + g(z) \end{aligned}$$

where the term g(z) vanishes at \(\langle \alpha , z \rangle = 0\).

We thus iterate for \(t = 2, \dots , m_{2\alpha }\). One has to use that \(1 + \dots + m_{2\alpha } = \frac{1}{2}m_{2\alpha }(m_{2\alpha } + 1)\), and at the end we get

$$\begin{aligned} \begin{aligned} \frac{P_1}{\langle \alpha , z \rangle ^{m_\alpha + m_{2\alpha }-1}} =&-\frac{m_{2\alpha }(m_{2\alpha } + 1)(2\alpha )^2}{2} A_{(2\alpha ) \alpha ^{m_\alpha + m_{2\alpha }-1}} \coth \langle 2\alpha , x \rangle \\&- \frac{m_\alpha (m_\alpha +2m_{2\alpha } + 1)\alpha ^2}{2} A_{\alpha ^{m_\alpha + m_{2\alpha }}} \coth \langle \alpha , x \rangle \end{aligned} \end{aligned}$$
(2.16)

at \(\langle \alpha , z \rangle = 0\). It is straightforward to check that \(P_1\) given by (2.5) satisfies (2.16) for all \(\alpha \in R_+\) such that \(\frac{\alpha }{2} \notin R_+\).

If there existed some other \({\widetilde{P}}_1\) of degree \(N-1\) satisfying (2.16), then (2.16) would imply that \(P_1 - {\widetilde{P}}_1\) is divisible by \(\langle \alpha , z \rangle ^{m_\alpha + m_{2\alpha }}\) for all \(\alpha \in R_+\) with \(\frac{\alpha }{2} \notin R_+\). But unless \(P_1 - {{\widetilde{P}}}_1 = 0\), this would mean that \(P_1 - {{\widetilde{P}}}_1\) has degree at least N, which is not possible. This completes the proof. \(\square \)

The next Proposition has a proof completely analogous to that of [13, Proposition 2]; it just uses Lemma 2.9 in place of [13, Lemma 2] (see also [8]).

Proposition 2.10

With notations and assumptions as in Theorem 2.8, the element \(p(z) = z^2\) of \({{\mathcal {R}}}\) corresponds to the differential operator

$$\begin{aligned} L_{z^2} = \Delta - \sum _{\gamma \in R_+} \frac{m_\gamma (m_\gamma + 2m_{2\gamma } + 1)\gamma ^2}{\sinh ^2 \langle \gamma , x \rangle }, \end{aligned}$$

which coincides (up to sign) with the generalised CMS operator (1.3) for \({\mathcal {A}}= (R_+, m)\).

This implies integrability of the Hamiltonian and provides a quantum integral of motion \(L_p\) for each \(p(z) \in {{\mathcal {R}}}\). The following statement is one of the main results of this paper. The proof will be presented below.

Theorem 2.11

There exists a BA function for the configuration \(AG_2\).

3 Generalised Macdonald–Ruijsenaars Operators

In Sect. 4, we will utilise a method for explicit construction of BA functions which was proposed by Chalykh [4] (see also [13] for further examples where this method is applied, and [3, 10] for the differential case). The construction uses (generalised) Macdonald–Ruijsenaars difference operators. The key element of the method is the preservation of a space of quasi-invariant analytic functions under the action of the Macdonald–Ruijsenaars operators. In this Section, we find sufficient conditions for a rather general difference operator to preserve such a ring of quasi-invariants.

Let \(W = \langle s_\alpha :\alpha \in R \rangle \), where \(s_\alpha \) is the orthogonal reflection about the hyperplane \(\langle \alpha , x \rangle =0\). We assume now that the collection R is W-invariant, that is \(w(R) = R\) for all \(w \in W\), and that the multiplicity map m is W-invariant, too. Let \(u^\vee = 2 u / \langle u, u \rangle \) for any \(u\in {\mathbb {C}}^n\) such that \(\langle u, u \rangle \ne 0\).

Let \({{\mathcal {R}}}^a\) be the ring of analytic functions p(z) such that

$$\begin{aligned} p(z + t\alpha ) = p(z - t\alpha ) \text { at } \langle \alpha , z \rangle = 0 \text { for } t \in A_\alpha \end{aligned}$$

for all \(\alpha \in R^r_+\), where \(A_\alpha \subset {\mathbb {N}}\) specifies the axiomatics that one wants to consider. For instance, in the next Sections we will use \(A_\alpha = \{1, 2, \dots , m_\alpha , m_\alpha + 2, \dots , m_\alpha + 2m_{2\alpha } \}\). We assume that \(A_{|w\alpha |} = A_\alpha \) for all \(w \in W\), \(\alpha \in R^r_+\), where \(|w\alpha | = w\alpha \) if \(w\alpha \in R^r_+\) and \(|w\alpha | = -w\alpha \) if \(w\alpha \in -R^r_+\). For \(\alpha \in R\), we let \({{\,\textrm{sgn}\,}}\alpha = 1\) if \(\alpha \in R_+\) and \({{\,\textrm{sgn}\,}}\alpha =-1\) if \(\alpha \in -R_+\).
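For orientation, the membership conditions defining \({{\mathcal {R}}}^a\) are straightforward to test symbolically. The following toy sketch (a hypothetical one-dimensional setting with a single root \(\alpha = 1\) and \(A_\alpha = \{1, 2\}\), not a configuration from this paper) checks the conditions for sample polynomials:

```python
import sympy as sp

z = sp.symbols('z')

# Toy axiomatics: one positive root alpha = 1 in R^1, with A_alpha = {1, 2},
# so the hyperplane <alpha, z> = 0 is just the point z = 0.
A_alpha = [1, 2]

def in_ring(p):
    """Check p(z + t*alpha) = p(z - t*alpha) at <alpha, z> = 0 for t in A_alpha."""
    return all(sp.simplify(p.subs(z, t) - p.subs(z, -t)) == 0 for t in A_alpha)

p_good = z * (z**2 - 1) * (z**2 - 4)   # odd, vanishing at z = ±1, ±2
p_even = z**2 + 7                       # even functions always qualify
p_bad  = z**3                           # fails already at t = 1

assert in_ring(p_good) and in_ring(p_even) and not in_ring(p_bad)
```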

Let \(S \subset {\mathbb {C}}^n\setminus \{0\}\) be a W-invariant finite collection of vectors. Let \(z \in {\mathbb {C}}^n\) and for any \(\gamma \in {\mathbb {C}}^n\) let \(T_\gamma \) be the shift operator that acts on functions f(z) by \(T_\gamma f(z) = f(z + \gamma )\). We are interested in difference operators D of the form

$$\begin{aligned} D = \sum _{\tau \in S} a_\tau (z) (T_\tau - 1), \end{aligned}$$
(3.1)

where \(a_\tau \) are rational functions with the following three properties:

\((D_{1})\):

\(\deg a_\tau = 0\).

\((D_{2})\):

\(a_\tau (z)\) has a simple pole at \(\langle \alpha , z \rangle = c\alpha ^2\) for \(\alpha \in R^r_+\), \(c \in {\mathbb {C}}\) if and only if \(\lambda = s_\alpha (\tau ) - 2c\alpha \in S\cup \{0\}\) and

$$\begin{aligned} \langle \tau + c\alpha , \alpha \rangle /\alpha ^2 = c+\langle \tau , (2\alpha )^\vee \rangle \in A_\alpha \cup (-A_\alpha ). \end{aligned}$$

There are no other singularities in \(a_\tau \). Denote the set of all such pairs \((\alpha , c)\) for this \(\tau \) by \(S_\tau \).

\((D_{3})\):

\(wa_\tau = a_{w\tau }\) for all \(w \in W\).

The condition \((D_{2})\) implies that if \(a_\tau \) has a singularity \(\langle \alpha , z \rangle =c \alpha ^2\), then for any such z the vectors \(z + \tau \) and \(z + \lambda \) are of the form \(z+\tau = {{{\widetilde{z}}}} + t\alpha \) and \(z + \lambda = {{{\widetilde{z}}}} - t\alpha \) for some \({{{\widetilde{z}}}}\) with \(\langle \alpha , {{{\widetilde{z}}}} \rangle = 0\) and \(t = c + \langle \tau , (2\alpha )^\vee \rangle \in A_\alpha \cup (-A_\alpha )\). We note also that \(\lambda \ne \tau \) since \(0 \notin A_\alpha \).

Note that if \(a_\tau \) has a singularity \(\langle \alpha , z \rangle =c \alpha ^2\) and the corresponding \(\lambda \ne 0\), then by condition \((D_2)\) \(a_\lambda (z)\) necessarily also has a singularity at \(\langle \alpha , z \rangle = c\alpha ^2\), since \(s_\alpha (\lambda ) - 2c\alpha = \tau \in S\) and \(\langle \lambda + c\alpha , \alpha \rangle = \langle s_\alpha (\tau + c\alpha ), \alpha \rangle = -\langle \tau + c\alpha , \alpha \rangle \). In other words, by condition \((D_2)\) we have \((\alpha , c) \in S_\tau \) if and only if \((\alpha , c) \in S_\lambda \) provided that both \(\tau , \lambda \ne 0\). We additionally observe the following.

Lemma 3.1

For any \(w \in W\), \((\alpha , c) \in S_\tau \) if and only if \((|w\alpha |, {{\,\textrm{sgn}\,}}(w\alpha ) c) \in S_{w\tau }\).

Proof

Let \(\varepsilon = {{\,\textrm{sgn}\,}}(w\alpha )\). Since \(s_{|w\alpha |} = w s_\alpha w^{-1}\) and \(\varepsilon |w\alpha | = w\alpha \), we get that \( s_{|w\alpha |}(w\tau ) - 2\varepsilon c|w\alpha | = w(s_\alpha (\tau ) - 2c\alpha ) \) belongs to \(S\cup \{0\}\) if and only if \(s_\alpha (\tau ) - 2c\alpha \in S\cup \{0\}\), by W-invariance of S. Furthermore, \(\langle w\tau + \varepsilon c|w\alpha |, |w\alpha | \rangle /|w\alpha |^2 =\pm \langle \tau + c\alpha , \alpha \rangle /\alpha ^2\), and \(A_\alpha = A_{|w\alpha |}\), by assumption. \(\square \)

More explicitly, we are looking at operators of the form

$$\begin{aligned} D = \sum _{\tau \in S} P_\tau (z) \left( \prod _{(\alpha , c) \in S_\tau } \left( \langle \alpha , z \rangle - c\alpha ^2 \right) ^{-1} \right) (T_\tau -1) \end{aligned}$$

for some polynomials \(P_\tau (z)\) of degree \(|S_\tau |\) so that \(\deg a_\tau = 0\) and such that the condition \((D_{3})\) holds. We want to find some sufficient conditions that would ensure that D preserves the ring \({{\mathcal {R}}}^a\).

Theorem 3.2

Suppose the operator (3.1) satisfies conditions \((D_{2})\) and \((D_{3})\). Then for any \(\alpha \in R^r_+\) and arbitrary \(p(z) \in {{\mathcal {R}}}^a\), we have the following two properties.

  1.

    Dp(z) is non-singular at \(\langle \alpha , z \rangle = 0\). Moreover, for any \(c \ne 0\), provided that for all \(\tau \in S\) such that \((\alpha , c) \in S_\tau \) and \(\lambda = s_\alpha (\tau ) - 2c\alpha \ne 0\) we have

    $$\begin{aligned} {{\,\textrm{res}\,}}_{\langle \alpha , z \rangle = c \alpha ^2}(a_\tau + a_\lambda ) = 0, \end{aligned}$$

    then Dp(z) is non-singular at \(\langle \alpha , z \rangle = c\alpha ^2\).

  2.

    Suppose, in addition to assumptions of part 1, that for all \(\tau \in S\) and any \(t \in A_\alpha \), the following is satisfied whenever \(t + \langle \tau , (2\alpha )^\vee \rangle \notin A_\alpha \cup (-A_\alpha ) \cup \{ 0 \}\):

    (a)

      \(a_\tau (z+t\alpha ) = 0 \text { at } \langle \alpha , z \rangle = 0\) (equivalently, \(P_\tau (z)\) has a factor of \(\langle \alpha , z \rangle - t\alpha ^2\)),

    or

    (b)

      \(\lambda = s_\alpha (\tau ) - 2t\alpha \in S\) and \(a_{\lambda }(z+t\alpha ) = a_\tau (z+t\alpha )\) at \(\langle \alpha , z \rangle = 0\).

    Then \(Dp(z + t\alpha ) = Dp(z-t\alpha ) \text { at } \langle \alpha , z \rangle = 0 \text { for all } t \in A_\alpha .\)

Proof

  1.

    Let \(c \in {\mathbb {C}}\). We want to show that the residue at \(\langle \alpha , z \rangle = c\alpha ^2\) of Dp(z) is zero. Take any \(\tau \in S\) such that \((\alpha , c) \in S_\tau \). Write \(\tau + c\alpha = t \alpha + \gamma \), where \(\langle \gamma , \alpha \rangle =0\) and \(t = \langle \tau + c\alpha , \alpha \rangle /\alpha ^2\). Let \(\lambda = s_\alpha (\tau ) - 2c\alpha \). Then \(\lambda + c\alpha = s_\alpha (\tau + c\alpha ) = -t\alpha + \gamma \). At \(\langle \alpha , z \rangle = c\alpha ^2\) we thus have

    $$\begin{aligned} p(z+\tau )&= p((z - c\alpha + \gamma ) + t\alpha ) = p((z - c\alpha + \gamma ) - t\alpha ) = p(z+\lambda ) \end{aligned}$$

    as \(\langle z-c\alpha + \gamma , \alpha \rangle = 0\) and \(t \in A_\alpha \cup (-A_\alpha )\) by assumption \((D_{2})\). So, if \(\lambda = 0\) then the simple pole at \(\langle \alpha , z \rangle = c\alpha ^2\) present in \(a_\tau (z)\) is cancelled by \((T_\tau -1)[p(z)] = p(z+\tau )-p(z)\). And if \(\lambda \ne 0\), then the sum

    $$\begin{aligned} a_\tau (z)(p(z + \tau )-p(z)) + a_\lambda (z)(p(z + \lambda )-p(z)) \end{aligned}$$

    contributes zero to the residue provided that the residue of \(a_\tau + a_\lambda \) is zero. For \(c \ne 0\), the latter is satisfied by assumption. In the case of \(c = 0\), we have \(\lambda = s_\alpha (\tau )\), hence \(a_\tau (s_{\alpha }(z)) = a_{\lambda }(z)\) by the symmetry \((D_{3})\) of the operator, so we get

    $$\begin{aligned} \lim _{\langle \alpha , z \rangle \rightarrow 0} \langle \alpha , z \rangle a_\tau (z) = \lim _{\langle \alpha , z \rangle \rightarrow 0} \langle \alpha , s_{\alpha }(z) \rangle a_\tau (s_{\alpha }(z)) = - \lim _{\langle \alpha , z \rangle \rightarrow 0} \langle \alpha , z \rangle a_{\lambda }(z), \end{aligned}$$

    that is, the residue of \(a_\tau (z)\) at \(\langle \alpha , z \rangle = 0\) is minus that of \(a_{\lambda }(z)\), as needed.

  2.

    Fix \(t \in A_\alpha \). By the symmetry \((D_{3})\) of the operator, we have \(a_\mu (z+s\alpha ) = a_{s_\alpha (\mu )}(z-s\alpha )\) for generic \(s \in {\mathbb {C}}\), generic \(z \in {\mathbb {C}}^n\) with \(\langle \alpha , z \rangle = 0\), and any \(\mu \in S\). By using that \(s_\alpha (S) = S\), we can thus write \(Dp(z+t\alpha ) - Dp(z-t\alpha )\) at \(\langle \alpha , z \rangle = 0\) as

    $$\begin{aligned} \lim _{s \rightarrow t}\sum _{\mu \in S} a_{\mu }(z+s\alpha )\bigg (p(z+s\alpha +\mu ) - p(z-s\alpha +s_\alpha (\mu )) - p(z + s\alpha ) + p(z-s\alpha )\bigg ). \end{aligned}$$
    (3.2)

Firstly, let us consider any \(\tau \in S\) for which \(a_{\tau }(z+t\alpha )\) is non-singular at \(\langle \alpha , z \rangle = 0\) (for generic z). Then the corresponding \(\mu = \tau \) term in the sum (3.2) can be simplified to

$$\begin{aligned} a_{\tau }(z+t\alpha )\left( p(z+t\alpha +\tau ) - p(z-t\alpha +s_\alpha (\tau ))\right) \end{aligned}$$
(3.3)

as \(p(z) \in {{\mathcal {R}}}^a\). Let \(\tau = b\alpha + \delta \), where \(\langle \delta , \alpha \rangle =0\) and \(b = \langle \tau , (2\alpha )^\vee \rangle \). Then \(s_\alpha (\tau ) = -b\alpha + \delta \). Thus

$$\begin{aligned}&p(z+t\alpha +\tau ) - p(z-t\alpha +s_\alpha (\tau ))\nonumber \\&\quad = p(z+\delta + (t+b)\alpha ) - p(z+\delta - (t+b)\alpha ), \end{aligned}$$
(3.4)

where \(\langle \alpha , z+\delta \rangle = 0\). Hence if \(t + b \in A_\alpha \cup (-A_\alpha ) \cup \{ 0 \}\), then (3.4) equals zero, and the whole term (3.3) vanishes. Else, we have by assumption two possibilities (cases (a) and (b)). If \(a_\tau (z+t\alpha ) = 0\) at \(\langle \alpha , z \rangle = 0\), then (3.3) vanishes; and if \(a_\tau (z+t\alpha ) \ne 0\), then \(\lambda = s_\alpha (\tau )-2t\alpha \in S\setminus \{\tau \}\) and \(a_{\lambda }(z+t\alpha ) = a_\tau (z+t\alpha )\). (Note that \(t + b \notin A_\alpha \cup (-A_\alpha ) \cup \{ 0 \}\) implies that \(\lambda \ne \tau \), and due to \((D_{2})\) also that \(a_{\lambda }(z+t\alpha )\) is well-defined at \(\langle \alpha , z \rangle = 0\) for generic z). In the latter case, the term corresponding to \(\mu = \lambda \) in the sum (3.2) can be simplified to

$$\begin{aligned}&a_{\lambda }(z+t\alpha )\left( p(z+t\alpha +\lambda ) - p(z-t\alpha +s_\alpha (\lambda ))\right) \\&\quad = a_{\tau }(z+t\alpha ) \left( p(z-t\alpha + s_\alpha (\tau )) - p(z+t\alpha + \tau ) \right) , \end{aligned}$$

which is the negative of (3.3), hence the terms corresponding to \(\mu =\tau \) and \(\mu =\lambda \) in (3.2) cancel out.

Secondly, let us consider any \(\tau \in S\) for which \(a_{\tau }(z+t\alpha )\) is singular at \(\langle \alpha , z \rangle = 0\). Equivalently, \(a_\tau ({{\widetilde{z}}})\) is singular at \(\langle \alpha , {{\widetilde{z}}} \rangle = t\alpha ^2\). Hence \((\alpha , t) \in S_\tau \) by assumption \((D_{2})\), in particular, \(\lambda \in S \cup \{0\}\) and \(t + b \in A_\alpha \cup (-A_\alpha )\). From the latter, it follows that expression (3.4) vanishes. We can restate this as

$$\begin{aligned} p(z+s\alpha + \tau ) - p(z-s\alpha + s_\alpha (\tau )) = (s-t)q(s) \end{aligned}$$

for some analytic function q(s) (\(s \in {\mathbb {C}}\)). Similarly, the condition \(p(z+t\alpha ) = p(z-t\alpha )\) at \(\langle \alpha , z \rangle = 0\) can be restated as

$$\begin{aligned} p(z+s\alpha ) - p(z-s\alpha ) = (s-t)r(s) \end{aligned}$$
(3.5)

for some analytic function r(s). Moreover, we also have

$$\begin{aligned}&p(z+s\alpha + \lambda ) - p(z-s\alpha + s_\alpha (\lambda )) = p(z-(2t-s)\alpha + s_\alpha (\tau )) - p(z + (2t-s)\alpha + \tau ) \nonumber \\&\quad = (s-t)q(2t-s). \end{aligned}$$
(3.6)

Suppose firstly that \(\lambda \ne 0\). Then in the sum (3.2) the two terms corresponding to \(\mu = \tau \) and \(\mu = \lambda \) cancel out. Indeed, they equal

$$\begin{aligned}&\lim _{s \rightarrow t} a_\tau (z+s\alpha )(s-t)(q(s)-r(s)) + a_\lambda (z+s\alpha )(s-t)(q(2t-s)-r(s)) \\&\quad = (q(t)-r(t)){{\,\textrm{res}\,}}_{\langle z, \alpha \rangle = t\alpha ^2}(a_\tau + a_\lambda ) = 0, \end{aligned}$$

because \({{\,\textrm{res}\,}}_{\langle z, \alpha \rangle = t\alpha ^2}(a_\tau + a_\lambda ) = 0\) by assumptions of part 1 with \(c = t\). Suppose now that \(\lambda = 0\), then \(r(s) = q(2t-s)\) by equalities (3.5) and (3.6). Therefore, the term corresponding to \(\mu = \tau \) in the sum (3.2) equals \( \lim _{s \rightarrow t} a_\tau (z+s\alpha )(s-t)\left( q(s)-q(2t-s)\right) = 0. \)

It follows that the sum (3.2) vanishes, as required. \(\square \)
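The residue-cancellation mechanism used in part 1 (in the case \(\lambda = 0\)) can be seen in a toy one-dimensional example. The data below (two shifts \(S = \{2, -2\}\) and rational coefficients with simple poles) is made up purely for illustration and is unrelated to the operators of this paper:

```python
import sympy as sp

z = sp.symbols('z')

# Toy 1D data: S = {2, -2}, with a_{-2}(z) = a_2(-z), mimicking (D_3).
# a_2 has a simple pole at z = -1; there tau = 2 gives lambda = 0, so the
# pole must be cancelled by (T_2 - 1)p vanishing, i.e. by p(1) = p(-1).
a = {2: 1 + 1/(z + 1), -2: 1 - 1/(z - 1)}

def D(p):
    """Apply D = sum_tau a_tau(z) (T_tau - 1) to p(z)."""
    return sum(a[tau] * (p.subs(z, z + tau) - p) for tau in a)

p = z**3 - z                 # satisfies p(1) = p(-1) = 0
Dp = sp.cancel(sp.together(D(p)))

# Both apparent poles cancel: D p is a polynomial.
assert sp.expand(Dp) == 36 * z
```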

It follows that if conditions \((D_{2})\) and \((D_{3})\) and the assumptions of both parts 1 and 2 of Theorem 3.2 are satisfied for all \(\alpha \in R^r_+\), then D preserves the ring \({{\mathcal {R}}}^a\), that is, \(Dp(z) \in {{\mathcal {R}}}^a\) if \(p(z) \in {{\mathcal {R}}}^a\).

Additionally, we can use the symmetry assumption \((D_{3})\) to reduce the number of conditions that we have to consider in Theorem 3.2. The following statements hold.

Lemma 3.3

Suppose that condition \((D_{3})\) holds. If \(a_\tau + a_\lambda \) has zero residue at \(\langle \alpha , z \rangle = c\alpha ^2\), then \(a_{w\tau } + a_{w\lambda }\) has zero residue at \(\langle w \alpha , z \rangle = c\alpha ^2\) for any \(w \in W\).

Proof

By the property \((D_{3})\), we have \(a_{w\tau }(z) + a_{w\lambda }(z) = a_{\tau }(w^{-1}z) + a_{\lambda }(w^{-1}z)\), therefore

$$\begin{aligned}&{{\,\textrm{res}\,}}_{\langle w\alpha , z \rangle = c\alpha ^2}(a_{w\tau }(z) + a_{w\lambda }(z) ) = \lim _{\langle w\alpha , z \rangle \rightarrow c\alpha ^2} (\langle w\alpha , z \rangle - c\alpha ^2)(a_{w\tau }(z) + a_{w\lambda }(z)) \\&\quad = \lim _{\langle \alpha , w^{-1}z \rangle \rightarrow c\alpha ^2} (\langle \alpha , w^{-1}z \rangle - c\alpha ^2)(a_{\tau }(w^{-1}z) + a_{\lambda }(w^{-1}z))\\&\qquad = {{\,\textrm{res}\,}}_{\langle \alpha , {{\widetilde{z}}} \rangle = c\alpha ^2} (a_{\tau }({{{\widetilde{z}}}}) + a_{\lambda }({{{\widetilde{z}}}})) = 0. \end{aligned}$$

\(\square \)

By combining Lemmas 3.1 and 3.3 and Theorem 3.2, we obtain the following.

Corollary 3.4

Suppose the operator (3.1) satisfies conditions \((D_{2})\) and \((D_{3})\). If the assumptions of part 1 of Theorem 3.2 are satisfied for some \(\alpha \in R^r_+\), then Dp(z) is non-singular at \(\langle w \alpha , z \rangle = c\alpha ^2\) for all \(w \in W\) and for all \(c \in {\mathbb {C}}\).

Proof

By Theorem 3.2 part 1, it suffices to check that for any \({{\widetilde{\tau }}} \in S\) and \(c\ne 0\) such that \((|w\alpha |, {{\,\textrm{sgn}\,}}(w\alpha )c) \in S_{{{\widetilde{\tau }}}}\) and such that \({{\widetilde{\lambda }}} = s_{|w\alpha |}({{\widetilde{\tau }}}) - 2c w\alpha \ne 0\), we have that the residue of \(a_{{{\widetilde{\tau }}}} + a_{{{\widetilde{\lambda }}}}\) at \(\langle w \alpha , z \rangle = c\alpha ^2\) is zero. Since S is W-invariant, we can write \({{\widetilde{\tau }}} = w\tau \) for some \(\tau \in S\). Lemma 3.1 then gives \((\alpha , c) \in S_\tau \). Note that \({\widetilde{\lambda }} = w\lambda \) for \(\lambda = s_\alpha (\tau ) - 2c\alpha \) (in particular, \(\lambda \ne 0\) as \({{\widetilde{\lambda }}} \ne 0\)). By assumption, part 1 of Theorem 3.2 holds for these \(\alpha \) and c, that is \({{\,\textrm{res}\,}}_{\langle \alpha , z \rangle = c\alpha ^2}(a_\tau + a_\lambda ) = 0\). Lemma 3.3 now gives what we need. \(\square \)

Lemma 3.5

Suppose the operator (3.1) satisfies conditions \((D_{2})\) and \((D_{3})\). If the assumptions of parts 1 and 2 of Theorem 3.2 are satisfied for some \(\alpha \in R^r_+\), then the assumptions of part 2 are also satisfied for \(w\alpha \) for all \(w \in W\) such that \(w\alpha \in R^r_+\).

Proof

Note that \(A_{w\alpha } = A_\alpha \). Thus we need to prove that whenever for some \(t \in A_\alpha \) and \({{\widetilde{\tau }}} \in S\) we have \(t + \langle {{\widetilde{\tau }}}, (2w\alpha )^\vee \rangle \notin A_{\alpha } \cup (-A_\alpha ) \cup \{0\}\), then either \(a_{{{\widetilde{\tau }}}}(z+tw\alpha ) = 0\) at \(\langle w\alpha , z \rangle = 0\), or else \({{\widetilde{\lambda }}} = s_{w\alpha }({\widetilde{\tau }}) - 2tw\alpha \in S\) and \(a_{{{\widetilde{\lambda }}}}(z+tw\alpha ) = a_{{{\widetilde{\tau }}}}(z+tw\alpha )\) at \(\langle w\alpha , z \rangle = 0\).

Suppose that \(t + \langle {{\widetilde{\tau }}}, (2w\alpha )^\vee \rangle \notin A_{\alpha } \cup (-A_\alpha ) \cup \{0\}\). Since S is W-invariant, we can write \({\widetilde{\tau }} = w \tau \) for some \(\tau \in S\). Note that then \({\widetilde{\lambda }} = w(s_\alpha (\tau ) - 2t\alpha ) = w \lambda \). Note also that \((2w\alpha )^\vee = w (2\alpha )^\vee \). Therefore \(t + \langle \tau , (2\alpha )^\vee \rangle = t + \langle {{\widetilde{\tau }}}, (2w\alpha )^\vee \rangle \notin A_{\alpha } \cup (-A_\alpha ) \cup \{0\}\). By assumption, part 2 of Theorem 3.2 holds for this \(\alpha \). Suppose firstly (case (a)) that \(a_\tau ({{{\widetilde{z}}}}+ t\alpha ) = 0\) at \(\langle \alpha , {{{\widetilde{z}}}} \rangle = 0\). By the symmetry \((D_{3})\) of the operator, at \(\langle w\alpha , z \rangle = 0\) (or, equivalently, at \(\langle \alpha , w^{-1}z \rangle = 0\)) we thus get \(a_{{{\widetilde{\tau }}}}(z+tw\alpha ) = a_{\tau }(w^{-1}z + t\alpha ) = 0\), as required. Otherwise (case (b)), \(\lambda \in S\) hence \({\widetilde{\lambda }} = w \lambda \in S\) by invariance, and at \(\langle w\alpha , z \rangle = 0\) we have \(a_{{{\widetilde{\lambda }}}}(z+tw\alpha ) - a_{{{\widetilde{\tau }}}}(z+tw\alpha ) = a_\lambda (w^{-1}z + t\alpha ) - a_\tau (w^{-1}z + t\alpha ) = 0\), as required. \(\square \)

Remark 3.6

Let \(\alpha \in R^r_+\). Suppose \(w \in W\) satisfies \(w \alpha = \alpha \). Then, for any \(\tau \in S\), in part 2 of Theorem 3.2 it suffices to check the given conditions for either \(\tau \) or \(w\tau \), as one implies the other. Indeed, we have \(t + \langle w\tau , (2\alpha )^\vee \rangle = t + \langle \tau , (2\alpha )^\vee \rangle \). Also \(s_\alpha (w\tau ) - 2t\alpha = w \lambda \), and at \(\langle \alpha , z \rangle = 0\) by the symmetry \((D_{3})\) we have \(a_{w\tau }(z + t\alpha ) = a_{\tau }(w^{-1}z + t\alpha )\) and in case (b) also \(a_{w\lambda }(z + t\alpha ) = a_{\lambda }(w^{-1}z + t\alpha )\).

4 Bispectral Dual Difference Operator for \(AG_2\)

In this Section, we find a difference operator \({{{\mathcal {D}}}}_1\) satisfying the conditions of Theorem 3.2 for the configuration \(AG_2\). The corresponding axiomatics is determined by the choice

$$\begin{aligned} A_\gamma = \{1, 2, 3, \dots , m_\gamma , m_\gamma + 2, m_\gamma + 4, \dots , m_\gamma + 2m_{2\gamma } \} \end{aligned}$$
(4.1)

for all \(\gamma \in G_{2,+}\). We define a difference operator acting in the variable \(z \in {\mathbb {C}}^2\) of the form

$$\begin{aligned} {{{\mathcal {D}}}}_1 =\sum _{\tau :\frac{1}{2} \tau \in G_2} a_\tau (z)(T_\tau -1). \end{aligned}$$
(4.2)

Let W be the Weyl group of the root system \(G_2\). For \(\tau = 2\varepsilon \alpha _j, \varepsilon \in \{\pm 1\}\), \(j \in \{1,2,3\}\), we define

$$\begin{aligned} \begin{aligned} a_{2\varepsilon \alpha _j}(z)&= \prod _{\begin{array}{c} \gamma \in W\beta _1 \\ \langle 2\varepsilon \alpha _j, (2\gamma )^\vee \rangle = 3 \end{array}} \bigg (1-\frac{(3m+2)\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \bigg (1-\frac{(3m+1)\gamma ^2}{\langle \gamma , z \rangle +\gamma ^2}\bigg ) \bigg (1-\frac{3m\gamma ^2}{\langle \gamma , z \rangle +2\gamma ^2}\bigg ) \\&\quad \times \prod _{\begin{array}{c} \gamma \in W\alpha _1 \\ \langle 2\varepsilon \alpha _j, (2\gamma )^\vee \rangle = 1 \end{array}} \bigg (1-\frac{m\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \times \bigg (1-\frac{m\alpha _j^2}{\langle \varepsilon \alpha _j, z \rangle }\bigg ) \bigg (1-\frac{m\alpha _j^2}{\langle \varepsilon \alpha _j, z \rangle +\alpha _j^2}\bigg ). \end{aligned} \end{aligned}$$
(4.3)

For \(\tau = 2\varepsilon \beta _j\), we define

$$\begin{aligned} \begin{aligned} a_{2\varepsilon \beta _j}(z)&= 3 \prod _{\begin{array}{c} \gamma \in W\beta _1 \\ \langle 2\varepsilon \beta _j, (2\gamma )^\vee \rangle = 1 \end{array}} \bigg (1-\frac{(3m+2)\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \bigg (1+\frac{3m\gamma ^2}{\langle \gamma , z \rangle +2\gamma ^2}\bigg ) \bigg (1-\frac{(3m-1)\gamma ^2}{\langle \gamma , z \rangle -\gamma ^2}\bigg ) \\&\quad \times \prod _{\begin{array}{c} \gamma \in W\alpha _1 \\ \langle 2\varepsilon \beta _j, (2\gamma )^\vee \rangle = 1 \end{array}} \bigg (1-\frac{m\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \times \bigg (1-\frac{(3m+2)\beta _j^2}{\langle \varepsilon \beta _j, z \rangle }\bigg ) \bigg (1-\frac{3m\beta _j^2}{\langle \varepsilon \beta _j, z \rangle +\beta _j^2}\bigg ). \end{aligned} \end{aligned}$$
(4.4)

The following Lemma shows that these functions \(a_\tau (z)\) have \(G_2\) symmetry.

Lemma 4.1

Let \(a_\tau (z)\) be defined as in (4.3) and (4.4). Then for all \(w \in W\), we have \(w a_\tau = a_{w\tau }\).

Proof

For any \(w \in W\), we have \(w(W\alpha _1) = W\alpha _1\), \(w(W\beta _1) = W\beta _1\), and \(\langle w\tau , w\gamma \rangle = \langle \tau , \gamma \rangle \) for all \(\gamma , \tau \in {\mathbb {C}}^2\). The multiplicities are invariant, too. The statement follows. \(\square \)

Define the ring \({{\mathcal {R}}}^a_{AG_2}\) of analytic functions p(z) satisfying conditions

$$\begin{aligned} \begin{aligned}&p(z + s\alpha _j) = p(z - s\alpha _j) \text { at } \langle \alpha _j, z \rangle = 0, s=1, 2, \dots , m, \\&p(z + s\beta _j) = p(z - s\beta _j) \text { at } \langle \beta _j, z \rangle = 0, s=1, 2, \dots , 3m, 3m + 2 \end{aligned} \end{aligned}$$
(4.5)

for all \(j=1,2,3\).
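The conditions (4.5) are exactly the specialisation of the sets (4.1) to the \(AG_2\) multiplicities (\(m_\gamma = m\), \(m_{2\gamma } = 0\) for the roots \(\alpha _j\), and \(m_\gamma = 3m\), \(m_{2\gamma } = 1\) for the roots \(\beta _j\)). A small helper (an illustrative sketch only) makes this explicit:

```python
def A(m_gamma, m_2gamma):
    """The set (4.1): {1, ..., m_gamma} together with
    {m_gamma + 2, m_gamma + 4, ..., m_gamma + 2*m_2gamma}."""
    return (list(range(1, m_gamma + 1))
            + [m_gamma + 2 * j for j in range(1, m_2gamma + 1)])

m = 2  # any positive integer multiplicity
assert A(m, 0) == [1, 2]                      # roots alpha_j: s = 1, ..., m
assert A(3 * m, 1) == [1, 2, 3, 4, 5, 6, 8]   # roots beta_j: s = 1, ..., 3m, 3m + 2
```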

Theorem 4.2

The operator (4.2) preserves the ring \({{\mathcal {R}}}^a_{AG_2}\).

Proof

One can check that this operator has the property (\(D_2\)) where \(S = 2G_2\). Let \(p(z) \in {{\mathcal {R}}}^a_{AG_2}\) be arbitrary. Without loss of generality, we put \(\omega = \sqrt{2}\). We introduce new coordinates (AB) on \({\mathbb {C}}^2\) given by \(A = \langle \alpha _1, z \rangle \) and \(B = \langle \beta _1, z \rangle \).

If \(B = 4\) (equivalently, \(\langle \beta _1, z \rangle = 2\beta _1^2\)), then \(\langle \beta _2, z \rangle = 2 + \frac{1}{2} A\), \(\langle \beta _3, z \rangle = -2 + \frac{1}{2} A\), \(\langle \alpha _2, z \rangle = -6 + \frac{1}{2} A\) and \(\langle \alpha _3, z \rangle = 6 + \frac{1}{2} A\). The only terms singular at \(B = 4\) are \(a_{-2\beta _2}\), \(a_{-2\alpha _3}\), \(a_{2\beta _3}\) and \(a_{2\alpha _2}\). Note that \(s_{\beta _1}(-2\beta _2) -4\beta _1 = -2\alpha _3\), and we compute that \({{\,\textrm{res}\,}}_{B = 4}(a_{-2\beta _2}) = - {{\,\textrm{res}\,}}_{B = 4}(a_{-2\alpha _3})\) equals

$$\begin{aligned} -\tfrac{3m(3m+2) (3m+4) (A-12-12 m)(A + 6m) (A-4+12 m) (A+12m) (A+4+12 m) (A + 12 + 12m)^2}{(A -12) (A-4) A^3 (A + 4) (A + 12)}. \end{aligned}$$

As \(s_{\alpha _1}(-2\beta _2) = 2\beta _3\) and \(s_{\alpha _1}(-2\alpha _3) = 2\alpha _2\), by Lemma 3.3 with \(w =s_{\alpha _1}\) we get that \(a_{2\beta _3} + a_{2\alpha _2}\) has zero residue at \(B = 4\), too. By Theorem 3.2 part 1, there is thus no singularity at \(B = 4\) in \( {{{\mathcal {D}}}}_1 p(z)\).

If \(B = 2\) (equivalently, \(\langle \beta _1, z \rangle = \beta _1^2\)), then \(\langle \beta _2, z \rangle = 1 + \frac{1}{2} A\), \(\langle \beta _3, z \rangle = -1 + \frac{1}{2} A\), \(\langle \alpha _2, z \rangle = -3 + \frac{1}{2} A\) and \(\langle \alpha _3, z \rangle = 3 + \frac{1}{2} A\). The only \(\tau \in 2G_2\) for which \(a_\tau \) is singular at \(B = 2\) and for which the corresponding \(\lambda = s_{\beta _1}(\tau ) - 2\beta _1 \ne 0\) are \(\tau = 2\beta _2, 2\alpha _2, -2\beta _3, -2\alpha _3\). Note that \(s_{\beta _1}(2\beta _2) -2\beta _1 = 2\alpha _2\), and we compute that \({{\,\textrm{res}\,}}_{B = 2} (a_{2\beta _2}) = - {{\,\textrm{res}\,}}_{B = 2} (a_{2\alpha _2})\) equals

$$\begin{aligned} \tfrac{6 (m+1) (3m-1) (3m+1) (A-10-12 m) (A-6-12 m) (A-2-12 m) (A+6-12m)^2 (A-6 m) (A+6+12 m)}{(A - 6)(A-2)A(A+2)(A+6)^3}. \end{aligned}$$

As \(s_{\alpha _1}(2\beta _2) = -2\beta _3\) and \(s_{\alpha _1}(2\alpha _2) = -2\alpha _3\), by Lemma 3.3 we get that \(a_{-2\beta _3} + a_{-2\alpha _3}\) has zero residue at \(B = 2\), too. By Theorem 3.2 part 1, there is thus no singularity at \(B = 2\) in \({{{\mathcal {D}}}}_1 p(z)\), nor at \(B = 0\).

It follows from the above analysis and from the form of the coefficient functions (4.3) and (4.4) that there are no singularities in \({{{\mathcal {D}}}}_1 p(z)\) at \(B = c\) for all \(c \ge 0\). By Corollary 3.4, there is also no singularity in \({{{\mathcal {D}}}}_1 p(z)\) at \(\langle \beta _i, z \rangle = c\) for all \(i=1,2,3\) and all \(c \in {\mathbb {C}}\).

The only singularity at \(A = \textrm{const} > 0\) present in the coefficients \(a_\tau \) for some \(\tau \) is at \(A = 6\) (equivalently, \(\langle \alpha _1, z \rangle = \alpha _1^2\)) when \(\tau = -2\alpha _1\). This singularity cancels in \({{{\mathcal {D}}}}_1 p(z)\) by Theorem 3.2 part 1, since the corresponding \(\lambda = s_{\alpha _1}(-2\alpha _1) - 2\alpha _1 = 0\). By Corollary 3.4, there is also no singularity in \({{{\mathcal {D}}}}_1 p(z)\) at \(\langle \alpha _i, z \rangle = c\) for all \(i=1,2,3\) and for all \(c \in \mathbb {C}\). This completes the proof that \({\mathcal D}_1 p(z)\) is analytic.

Let us now show that \({{{\mathcal {D}}}}_1 p(z)\) satisfies the defining conditions of the ring \({{\mathcal {R}}}^a_{AG_2}\). We have \(A_{\beta _i} = \{1, 2, 3, \dots , 3m, 3m+2\}\) and \(A_{\alpha _i} = \{ 1,2, \dots , m\}\) (\(i=1,2,3\)). Let us first show that \({{{\mathcal {D}}}}_1 p(z + t\beta _1) = {{{\mathcal {D}}}}_1 p(z-t\beta _1)\) at \(\langle \beta _1, z \rangle = 0\) for all \(t \in A_{\beta _1}\). To do so, we will check condition 2 in Theorem 3.2 for all \(\tau \in 2G_2\).

Note that \((2\beta _1)^\vee = \frac{1}{2} \beta _1\). Let \(\tau = 2\beta _1\). Then \(|t + \langle \tau , (2\beta _1)^\vee \rangle | = t + 2\) which does not belong to \(A_{\beta _1} \cup \{0\}\) if and only if \(t = 3m-1\) or \(t = 3m+2\). But

$$\begin{aligned} a_{2\beta _1}(z+(3m+2)\beta _1) = a_{2\beta _1}(z+(3m-1)\beta _1) = 0 \text { at } \langle \beta _1, z \rangle = 0 \end{aligned}$$

because \(a_{2\beta _1}(z)\) contains the factors \(\left( 1-\frac{(3m+2)\beta _1^2}{\langle \beta _1, z \rangle }\right) \left( 1-\frac{3m\beta _1^2}{\langle \beta _1, z \rangle + \beta _1^2}\right) \).

Let now \(\tau = -2\beta _1\). Then \(|t + \langle \tau , (2\beta _1)^\vee \rangle | = |t - 2| \in A_{\beta _1} \cup \{0\}\) for all \(t\in A_{\beta _1}\), as needed.

Let now \(\tau = 2\beta _2\). Then \(|t + \langle \tau , (2\beta _1)^\vee \rangle | = t + 1\) which does not belong to \(A_{\beta _1} \cup \{0\}\) if and only if \(t = 3m\) or \(t = 3m+2\). But

$$\begin{aligned} a_{2\beta _2}(z+(3m+2)\beta _1) = a_{2\beta _2}(z+3m\beta _1) = 0 \text { at } \langle \beta _1, z \rangle = 0 \end{aligned}$$

because \(a_{2\beta _2}(z)\) contains the factors \(\left( 1-\frac{(3m+2)\beta _1^2}{\langle \beta _1, z \rangle }\right) \left( 1-\frac{(3m-1)\beta _1^2}{\langle \beta _1, z \rangle - \beta _1^2}\right) \).

Let now \(\tau = -2\beta _2\). Then \(|t + \langle \tau , (2\beta _1)^\vee \rangle | = t - 1\) which does not belong to \(A_{\beta _1} \cup \{0\}\) if and only if \(t = 3m+2\). But \(a_{-2\beta _2}(z+(3m+2)\beta _1) = 0\) at \(\langle \beta _1, z \rangle = 0\) because \(a_{-2\beta _2}\) contains the factor \( \left( 1+\frac{3m\beta _1^2}{-\langle \beta _1, z \rangle +2\beta _1^2} \right) \). Since \(s_{\alpha _1}(\beta _1) = \beta _1\), by Remark 3.6 there is nothing to check for \(\tau = \pm 2\beta _3 = s_{\alpha _1}(\mp 2\beta _2)\).

For \(\tau = \pm 2\alpha _1\), we get \(|t + \langle \tau , (2\beta _1)^\vee \rangle | = t \in A_{\beta _1}\), as needed. Similarly for \(\tau = 2\alpha _2\), \(|t + \langle \tau , (2\beta _1)^\vee \rangle |\) \(= |t - 3| \in A_{\beta _1} \cup \{0\}\) for all \(t \in A_{\beta _1}\), as needed.

Finally, let \(\tau = -2\alpha _2\). Then \(|t + \langle \tau , (2\beta _1)^\vee \rangle | = t + 3 \notin A_{\beta _1} \cup \{0\}\) if and only if \(t = 3m+2\), \(t = 3m\) or \(t = 3m-2\), but \(a_{-2\alpha _2}(z+t\beta _1) = 0\) at \(\langle \beta _1, z \rangle = 0\) for those t because \(a_{-2\alpha _2}\) contains the factors \(\left( 1- \frac{(3m+2)\beta _1^2}{\langle \beta _1, z \rangle } \right) \left( 1- \frac{(3m+1)\beta _1^2}{\langle \beta _1, z \rangle +\beta _1^2} \right) \left( 1- \frac{3m\beta _1^2}{\langle \beta _1, z \rangle +2\beta _1^2} \right) \). By Remark 3.6, there is nothing to check for \(\tau = \pm 2\alpha _3 = s_{\alpha _1}(\mp 2\alpha _2)\).
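The membership checks above are elementary; the following sketch verifies, for a range of values of m, that the exceptional \(t \in A_{\beta _1}\) for each shift \(\langle \tau , (2\beta _1)^\vee \rangle \) are exactly the ones listed.

```python
# For A_{beta_1} = {1,...,3m} ∪ {3m+2}, find the t for which |t + shift|
# falls outside A_{beta_1} ∪ {0}, and compare with the exceptional t above.
for m in range(1, 9):
    A = set(range(1, 3 * m + 1)) | {3 * m + 2}

    def bad(shift, A=A):
        return {t for t in A if abs(t + shift) not in A | {0}}

    assert bad(2) == {3 * m - 1, 3 * m + 2}         # tau = 2 beta_1
    assert bad(-2) == set()                         # tau = -2 beta_1
    assert bad(1) == {3 * m, 3 * m + 2}             # tau = 2 beta_2
    assert bad(-1) == {3 * m + 2}                   # tau = -2 beta_2
    assert bad(0) == set()                          # tau = ±2 alpha_1
    assert bad(-3) == set()                         # tau = 2 alpha_2
    assert bad(3) == {3 * m - 2, 3 * m, 3 * m + 2}  # tau = -2 alpha_2
```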

Let us show next that \({{{\mathcal {D}}}}_1 p(z + t\alpha _1) = {{{\mathcal {D}}}}_1 p(z-t\alpha _1)\) at \(\langle \alpha _1, z \rangle = 0\) for all \(t \in A_{\alpha _1}\). By Theorem 3.2 it is sufficient to check its condition 2 for all \(\tau \in 2G_2\).

Let \(\tau = \pm 2\beta _1\). Then \(|t + \langle \tau , (2\alpha _1)^\vee \rangle | = t \in A_{\alpha _1}\), as needed.

Let now \(\tau = 2\beta _2\). Note that \((2\alpha _1)^\vee = \frac{1}{6} \alpha _1\). Then \(|t + \langle \tau , (2\alpha _1)^\vee \rangle | = t+1 \notin A_{\alpha _1} \cup \{0\}\) if and only if \(t = m\). But \(a_{2\beta _2}(z+m\alpha _1) = 0\) at \(\langle \alpha _1, z \rangle = 0\) because \(a_{2\beta _2}\) contains the factor \(\left( 1-\frac{m\alpha _1^2}{\langle \alpha _1, z \rangle } \right) \).

Let now \(\tau = -2\beta _2\). Then \(|t + \langle \tau , (2\alpha _1)^\vee \rangle | = t-1 \in A_{\alpha _1} \cup \{0\}\) for all \(t \in A_{\alpha _1}\), as needed. Since \(s_{\beta _1}(\alpha _1) = \alpha _1\), by Remark 3.6 there is nothing to check for \(\tau = \pm 2\beta _3 = s_{\beta _1}(\pm 2\beta _2)\).

Let now \(\tau = 2\alpha _1\). Then \(|t + \langle \tau , (2\alpha _1)^\vee \rangle | = t+2 \notin A_{\alpha _1} \cup \{0\}\) if and only if \(t = m\) or \(t=m-1\). But \(a_{2\alpha _1}(z+t\alpha _1) = 0\) at \(\langle \alpha _1, z \rangle = 0\) for those t because \(a_{2\alpha _1}\) contains the factors \(\left( 1-\frac{m\alpha _1^2}{\langle \alpha _1, z \rangle } \right) \left( 1-\frac{m\alpha _1^2}{\langle \alpha _1, z \rangle + \alpha _1^2} \right) \).

Let now \(\tau = -2\alpha _1\). Then \(|t + \langle \tau , (2\alpha _1)^\vee \rangle | = |t-2| \in A_{\alpha _1} \cup \{0\}\) for all \(t \in A_{\alpha _1}\), as needed.

Let now \(\tau = 2\alpha _2\). Then \(|t + \langle \tau , (2\alpha _1)^\vee \rangle | = t+1 \notin A_{\alpha _1} \cup \{0\}\) if and only if \(t = m\). But \(a_{2\alpha _2}(z+m\alpha _1) = 0\) at \(\langle \alpha _1, z \rangle = 0\) because \(a_{2\alpha _2}\) contains the factor \(\left( 1-\frac{m\alpha _1^2}{\langle \alpha _1, z \rangle } \right) \).

Finally, for \(\tau = -2\alpha _2\) we get \(|t + \langle \tau , (2\alpha _1)^\vee \rangle | = t-1 \in A_{\alpha _1} \cup \{0\}\) for all \(t \in A_{\alpha _1}\). And by Remark 3.6 there is nothing to check for \(\tau = \pm 2\alpha _3 = s_{\beta _1}(\pm 2\alpha _2)\).
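An analogous sketch verifies the exceptional \(t \in A_{\alpha _1}\) for each shift \(\langle \tau , (2\alpha _1)^\vee \rangle \). We display it for \(m \ge 2\); for \(m = 1\) the listed exceptional sets are to be intersected with \(A_{\alpha _1}\).

```python
# For A_{alpha_1} = {1,...,m}, find the t with |t + shift| outside A ∪ {0}.
for m in range(2, 9):
    A = set(range(1, m + 1))

    def bad(shift, A=A):
        return {t for t in A if abs(t + shift) not in A | {0}}

    assert bad(0) == set()        # tau = ±2 beta_1
    assert bad(1) == {m}          # tau = 2 beta_2 and 2 alpha_2
    assert bad(-1) == set()       # tau = -2 beta_2 and -2 alpha_2
    assert bad(2) == {m - 1, m}   # tau = 2 alpha_1
    assert bad(-2) == set()       # tau = -2 alpha_1
```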

Since all the vectors \(\alpha _i, \beta _i\) lie in the W-orbits of \(\alpha _1\) and \(\beta _1\), the statement follows by Lemma 3.5. \(\square \)

Let us now look at the expansion of the operator (4.2) as \(\omega \rightarrow 0\). It produces the rational CMS operator in the potential-free gauge for the root system of type \(G_2\) with multiplicity m for the long roots and multiplicity \(3m+1\) for the short roots, as the next Proposition shows. Let \({\widetilde{\beta }}_j = \omega ^{-1}\beta _j\) and \({\widetilde{\alpha }}_j = \omega ^{-1}\alpha _j\) \((j=1,2,3)\) with the same multiplicities as \(\beta _j\) and \(\alpha _j\), respectively, and let \(m_{2 {{{\widetilde{\beta }}}}_j}=m_{2 \beta _j}=1\), \(m_{2 {{{\widetilde{\alpha }}}}_j}=m_{2 \alpha _j}=0\).

Proposition 4.3

We have

$$\begin{aligned} \lim _{\omega \rightarrow 0} \frac{{{{\mathcal {D}}}}_1}{72\omega ^{2}} = \Delta - \sum _{\gamma \in \{ \widetilde{\beta _i}, {\widetilde{\alpha }}_i:i=1,2,3\}} \frac{2(m_\gamma + m_{2\gamma })}{\langle \gamma , z \rangle } \partial _{\gamma }, \end{aligned}$$

where \(\Delta = \partial ^2_{z_1}+\partial ^2_{z_2}\) and \(\partial _\gamma =\langle \gamma , \partial \rangle \) is the derivative in the direction \(\gamma \).

Proof

We have \(T_{\pm 2\beta _j} - 1 = \pm \omega \partial _{2 {\widetilde{\beta }}_j} + \frac{1}{2}\omega ^2 \partial _{2{\widetilde{\beta }}_j}^2 + \dots \), and similarly for the other shifts. The terms of order \(\omega \) in the expansion of the operator \({{{\mathcal {D}}}}_1\) as \(\omega \rightarrow 0\) vanish. The terms in the coefficient at \(\omega ^2\) that are second order in derivatives are

$$\begin{aligned}&3\sum _{j=1}^3 \partial _{2{\widetilde{\beta }}_j}^2 + \sum _{j=1}^3 \partial _{2{\widetilde{\alpha }}_j}^2 = 72 \Delta . \end{aligned}$$

Let us now consider the terms that are first order in derivatives in the coefficient at \(\omega ^2\). It is easy to see that such terms containing \(\langle {\widetilde{\beta }}_1, z \rangle ^{-1}\) are

$$\begin{aligned}&-12(3m+1)\left( 2\partial _{2{\widetilde{\beta }}_1} + \partial _{2{\widetilde{\beta }}_2} - \partial _{2{\widetilde{\beta }}_3} +\partial _{2{\widetilde{\alpha }}_3} - \partial _{2{\widetilde{\alpha }}_2}\right) = -144(m_{\beta _1} + m_{2\beta _1}) \partial _{{\widetilde{\beta }}_1}. \end{aligned}$$

Altogether, the term at \(\omega ^2\) in the expansion of the operator \({{{\mathcal {D}}}}_1\) is as required. \(\square \)
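The collapse of the displayed combination of derivatives to a multiple of \(\partial _{{\widetilde{\beta }}_1}\) rests on the linear relations \(\beta _2 - \beta _3 = \beta _1\) and \(\alpha _3 - \alpha _2 = 3\beta _1\), so that \(2(2\beta _1) + 2\beta _2 - 2\beta _3 + 2\alpha _3 - 2\alpha _2 = 12\beta _1\). A quick check in an explicit integer model of \(G_2\) (our own realization, with \(\beta ^2 = 2\), \(\alpha ^2 = 6\)):

```python
# Check 2*beta_1 + beta_2 - beta_3 + alpha_3 - alpha_2 = 6*beta_1, which
# (after doubling each shift direction) gives the factor 12 in -12(3m+1)*12.
b1, b2, b3 = (1, -1, 0), (1, 0, -1), (0, 1, -1)
a2, a3 = (-1, 2, -1), (2, -1, -1)

comb = tuple(2 * x1 + x2 - x3 + y3 - y2
             for x1, x2, x3, y2, y3 in zip(b1, b2, b3, a2, a3))
assert comb == tuple(6 * c for c in b1)
# the two linear relations used:
assert tuple(x - y for x, y in zip(b2, b3)) == b1
assert tuple(x - y for x, y in zip(a3, a2)) == tuple(3 * c for c in b1)
```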

5 Construction of the Baker–Akhiezer Function for \(AG_2\)

In this Section, we employ the method from [4] to give a construction of the BA function for the configuration \(AG_2\). The BA function will be an eigenfunction for the difference operator from Sect. 4, which establishes bispectrality of the CMS \(AG_2\) Hamiltonian for integer coupling parameters.

The following Lemma gives a useful way of expanding the functions \(a_\tau \) in the operator (4.2).

Lemma 5.1

Let \(a_\tau (z)\) be defined as in (4.3) and (4.4). Then

$$\begin{aligned} a_{\tau }(z) = \kappa _\tau - \kappa _\tau \sum _{\gamma \in G_{2,+}} \frac{\langle \tau , \gamma \rangle (m_\gamma + m_{2\gamma })}{\langle \gamma , z \rangle } + R_\tau (z), \end{aligned}$$
(5.1)

where \(\kappa _{2\varepsilon \beta _j} = 3\) and \(\kappa _{2\varepsilon \alpha _j} = 1\), and \(R_\tau (z)\) is a rational function with \(\deg R_\tau \le -2\).

Proof

For the factors in \(a_\tau \) with shifted singularities at \(\langle \gamma , z \rangle + c = 0\) for \(c \ne 0\), we can use that

$$\begin{aligned} \frac{1}{\langle \gamma , z \rangle + c} = \frac{1}{\langle \gamma , z \rangle } - \frac{c}{(\langle \gamma , z \rangle + c)\langle \gamma , z \rangle }, \end{aligned}$$

which differs from \(\langle \gamma , z \rangle ^{-1}\) only by a rational function of degree \(-2\) which cannot affect the coefficient at \(\langle \gamma , z \rangle ^{-1}\). The relation (5.1) is then obtained by multiplying out the factors in each of the \(a_\tau \). \(\square \)
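The expansion mechanism of this proof can be illustrated on a model product of factors (the product below is illustrative, with hypothetical parameters a, b, c; it is not one of the actual \(a_\tau \)): shifting a pole by c changes the expansion at infinity only in degrees \(\le -2\), so the coefficient at \(\langle \gamma , z \rangle ^{-1}\) is read off factor by factor.

```python
# Model of Lemma 5.1: kappa*(1 - a/y)*(1 - b/(y+c)) = kappa - kappa*(a+b)/y + O(y^-2).
import sympy as sp

y, a, b, c, u = sp.symbols('y a b c u')

# the partial-fraction step used in the proof
assert sp.simplify(1 / (y + c) - (1 / y - c / ((y + c) * y))) == 0

f = 3 * (1 - a / y) * (1 - b / (y + c))               # kappa = 3, as for tau = 2*eps*beta_j
ser = sp.series(f.subs(y, 1 / u), u, 0, 2).removeO()  # expansion at y -> oo, u = 1/y
assert sp.expand(ser - (3 - 3 * (a + b) * u)) == 0    # kappa - kappa*(a+b)/y + O(y^-2)
```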

The next Lemma is proved by a direct computation that uses Lemma 5.1. We will apply it in the proof of Theorem 5.3 below.

Lemma 5.2

For \(\gamma \in G_{2, +}\) let \(n_\gamma \in {\mathbb {N}}\) be arbitrary. Let \(N = \sum _{\gamma \in G_{2, +} }n_\gamma \). Let

$$\begin{aligned} \mu (x) = \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau (\exp \langle x, \tau \rangle -1), \end{aligned}$$
(5.2)

where \(\kappa _\tau \) are as in Lemma 5.1. Let \(A(z) = \prod _{\gamma \in G_{2, +}} \langle \gamma , z \rangle ^{n_\gamma }\). Write \( ({{{\mathcal {D}}}}_1 - \mu (x)) [ A(z)\exp {\langle x, z \rangle }] = R(x,z)\exp \langle x, z \rangle \) for some rational function R(x, z) in z, which has degree less than or equal to N. Then

$$\begin{aligned} R(x, z) = \sum _{\gamma \in G_{2, +}} \big (n_\gamma -(m_\gamma + m_{2\gamma })\big ) \left( \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle \exp {\langle x, \tau \rangle }\right) A(z)\langle \gamma , z \rangle ^{-1} + S(x, z)\end{aligned}$$

for some rational function S(x, z) in z with degree less than or equal to \(N-2\).

Proof

By making use of the expression for \(a_\tau (z)\) given in Lemma 5.1 we get

$$\begin{aligned}&{{{\mathcal {D}}}}_1 [ A(z) \exp {\langle x, z \rangle }] = \sum _{\tau :\frac{1}{2} \tau \in G_2} a_\tau (z)(T_\tau -1) [A(z) \exp {\langle x, z \rangle }] \\&\quad = \exp {\langle x, z \rangle }\sum _{\tau :\frac{1}{2} \tau \in G_2} a_\tau (z)\bigg ( \exp {\langle x, \tau \rangle }\prod _{\gamma \in G_{2, +}} (\langle \gamma , z \rangle + \langle \tau , \gamma \rangle )^{n_\gamma } - A(z) \bigg ) \\&\quad = A(z)\exp {\langle x, z \rangle } \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \left( 1 - \sum _{\gamma \in G_{2,+}} \langle \tau , \gamma \rangle (m_\gamma + m_{2\gamma })\langle \gamma , z \rangle ^{-1} + \text {l.o.t.}\right) \\&\qquad \times \left( \exp {\langle x, \tau \rangle }\bigg (1 + \sum _{\gamma \in G_{2, +}} n_\gamma \langle \tau , \gamma \rangle \langle \gamma , z \rangle ^{-1} + \text {l.o.t.}\bigg ) - 1\right) \\&\quad = A(z)\exp \langle x, z \rangle \left( \mu (x) + \sum _{\gamma \in G_{2, +}} \big (n_\gamma -(m_\gamma + m_{2\gamma })\big ) \left( \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle \exp {\langle x, \tau \rangle }\right) \right. \\&\qquad \left. \langle \gamma , z \rangle ^{-1} + \text {l.o.t.}\right) , \end{aligned}$$

where \(\text {l.o.t.}\) denotes lower degree terms in z, and where we used that \(\sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle = 0\) for all \(\gamma \in G_{2, +}\), since if \(\frac{1}{2}\tau \in G_2\) then also \(-\frac{1}{2}\tau \in G_2\) and \(\kappa _\tau = \kappa _{-\tau }\). \(\square \)
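The cancellation \(\sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle = 0\) used at the end of the proof can be confirmed in an explicit integer model of \(G_2\) (our own realization, with \(\beta ^2 = 2\), \(\alpha ^2 = 6\), so that \(\tau = 2\beta \) has \(\tau ^2 = 8\) and \(\tau = 2\alpha \) has \(\tau ^2 = 24\)):

```python
# Check sum over tau (tau/2 in G2) of kappa_tau * <tau, gamma> = 0 for all gamma.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

betas = [(1, -1, 0), (1, 0, -1), (0, 1, -1)]
alphas = [(1, 1, -2), (-1, 2, -1), (2, -1, -1)]
G2 = [v for r in betas + alphas for v in (r, tuple(-c for c in r))]
taus = [tuple(2 * c for c in r) for r in G2]
kappa = {t: 3 if dot(t, t) == 8 else 1 for t in taus}  # kappa = 3 for 2beta, 1 for 2alpha

for g in G2:
    assert sum(kappa[t] * dot(t, g) for t in taus) == 0
```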

We are ready to give the main result of this Section.

Theorem 5.3

Let \(M = \sum _{\gamma \in AG_{2, +}} m_\gamma = 12m+3.\) Let

$$\begin{aligned} c(x) = \frac{M!}{8} \prod _{\gamma \in G_{2, +}} \left( \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle \exp {\langle x, \tau \rangle } \right) ^{m_\gamma + m_{2\gamma }} \end{aligned}$$
(5.3)

for \(x \in {\mathbb {C}}^2\). Define the polynomial

$$\begin{aligned} Q(z) = \prod _{\begin{array}{c} \gamma \in G_{2, +} \\ s \in A_\gamma \end{array}} \left( \langle \gamma , z \rangle ^2-s^2\gamma ^4\right) , \end{aligned}$$
(5.4)

\(z \in {\mathbb {C}}^2\), where \(A_\gamma \) is given by (4.1). Then the function

$$\begin{aligned} \psi (z, x) = c^{-1}(x) ({{{\mathcal {D}}}}_1 - \mu (x))^M [Q(z) \exp \langle z, x \rangle ], \end{aligned}$$
(5.5)

where \(\mu (x)\) is given by (5.2), is the BA function for \(R=AG_2\). Moreover, \(\psi \) is also an eigenfunction of the operator \({{{\mathcal {D}}}}_1\) with \({{{\mathcal {D}}}}_1 \psi = \mu (x) \psi \), thus bispectrality holds.

Proof

The operator \({{{\mathcal {D}}}}_1\) preserves the ring \({{\mathcal {R}}}^a_{AG_2}\) by Theorem 4.2. The function \(Q(z)\exp \langle z, x \rangle \) belongs to \({{\mathcal {R}}}^a_{AG_2}\) since it is analytic and satisfies conditions (4.5) given that \(Q(z + s\gamma ) = Q(z - s\gamma ) = 0\) at \(\langle \gamma , z \rangle = 0\), \(s \in A_\gamma \), for all \(\gamma \in G_{2, +}\). Since \({{{\mathcal {D}}}}_1\) preserves \({{\mathcal {R}}}^a_{AG_2}\), so does \({{{\mathcal {D}}}}_1 - \mu (x)\), hence \(\psi (z, x)\) given by (5.5) belongs to \({{\mathcal {R}}}^a_{AG_2}\). Its analyticity and the form of the functions \(a_\tau \) imply that it equals \(c^{-1}(x)P(z, x) \exp \langle z, x \rangle \) for some polynomial P(z, x) in z. To prove that \(\psi (z, x)\) satisfies the definition of the BA function, it thus suffices to calculate the highest degree term in P(z, x).

The highest degree term in Q(z) is \(Q_0(z) = \prod _{\gamma \in G_{2, +}} \langle \gamma , z \rangle ^{2(m_\gamma + m_{2\gamma })}\) and \(\deg Q_0 = 2M\). For all \(k \in {\mathbb {N}}\) with \(k \le M\), an argument analogous to the one above gives that \(({{{\mathcal {D}}}}_1 - \mu (x))^k[Q(z)\exp \langle z, x \rangle ]\) belongs to \({{\mathcal {R}}}^a_{AG_2}\) and is of the form \(Q^{(k)}(z, x) \exp \langle z, x \rangle \) for some polynomial \(Q^{(k)}(z, x)\) in z. Let its highest-degree homogeneous component be \(Q_0^{(k)}(z, x)\). Lemma 5.2 allows us to compute \(Q_0^{(k)}(z, x)\).

Lemma 5.2 gives that after the first application of \({{{\mathcal {D}}}}_1-\mu (x)\) to \(Q(z)\exp \langle z, x \rangle \) we get

$$\begin{aligned} Q_0^{(1)} = \sum _{\gamma \in G_{2, +}}(m_\gamma + m_{2\gamma })\left( \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle \exp {\langle x, \tau \rangle }\right) \langle \gamma , z \rangle ^{-1}Q_0(z). \end{aligned}$$

The second application gives

$$\begin{aligned}&Q_0^{(2)} = \sum _{\gamma \in G_{2, +}}(m_\gamma + m_{2\gamma })(m_\gamma + m_{2\gamma }-1)\left( \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle \exp {\langle x, \tau \rangle }\right) ^2\langle \gamma , z \rangle ^{-2}Q_0(z) \\&\quad +\sum _{\begin{array}{c} \gamma \ne \delta \\ \gamma , \delta \in G_{2, +} \end{array}}(m_\gamma + m_{2\gamma })(m_\delta + m_{2\delta }) \left( \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle \exp {\langle x, \tau \rangle }\right) \\&\quad \times \left( \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \delta \rangle \exp {\langle x, \tau \rangle }\right) \langle \gamma , z \rangle ^{-1}\langle \delta , z \rangle ^{-1}Q_0(z). \end{aligned}$$

By repeatedly applying Lemma 5.2 we get

$$\begin{aligned} Q_0^{(k)} = \sum _{{{\textbf{n}}}} f_{{{\textbf{n}}}}(x) Q_0(z) \prod _{\gamma \in G_{2, +}}\langle \gamma , z \rangle ^{-n_\gamma } \end{aligned}$$

where \({{\textbf{n}}} = (n_\gamma )_{\gamma \in G_{2,+}}\) for \(n_\gamma \in {\mathbb {Z}}_{\ge 0}\) such that the \(n_\gamma \) add up to k, and where \(f_\mathbf{{n}}(x)\) is non-zero only if \(n_\gamma \le m_\gamma + m_{2\gamma }\) for all \(\gamma \). It follows that \(\deg P \le M\) and that the highest degree term of P(z, x) is

$$\begin{aligned} d(x) \prod _{\gamma \in G_{2, +}} \langle \gamma , z \rangle ^{m_\gamma + m_{2\gamma }} = \frac{1}{8} d(x)\prod _{\gamma \in AG_{2,+}} \langle \gamma , z \rangle ^{m_\gamma } \end{aligned}$$

for some function d(x). It also implies that the polynomial part of \(({{{\mathcal {D}}}}_1 - \mu (x))^{M + 1}[Q(z)\exp \langle z, x \rangle ]\) has degree less than M and hence vanishes as a consequence of Lemma 2.6, giving \({{{\mathcal {D}}}}_1 \psi = \mu (x)\psi \). So to complete the proof, we just need to verify that c(x) given by (5.3) equals \(\frac{1}{8}d(x)\).

To arrive at \(\prod _{\gamma \in G_{2, +}} \langle \gamma , z \rangle ^{m_\gamma + m_{2\gamma }}\) starting from \(Q_0(z)\), we need to reduce the power of each factor \(\langle \gamma , z \rangle \) by \(m_\gamma + m_{2\gamma }\) in total, and we do this by reducing the power of one of them by one at each step. The number of possible orderings equals the number of words of length M in the alphabet \(G_{2, +}\) in which each \(\gamma \in G_{2, +}\) appears \(m_\gamma + m_{2\gamma }\) times. This gives

$$\begin{aligned} \frac{M!}{\prod _{\gamma \in G_{2, +}}(m_\gamma + m_{2\gamma })!} \end{aligned}$$

possibilities, and for each of them, by Lemma 5.2, the total proportionality factor picked up equals

$$\begin{aligned} \prod _{\gamma \in G_{2, +}}(m_\gamma + m_{2\gamma })! \left( \sum _{\tau :\frac{1}{2} \tau \in G_2} \kappa _\tau \langle \tau , \gamma \rangle \exp {\langle x, \tau \rangle } \right) ^{m_\gamma + m_{2\gamma }}. \end{aligned}$$

It follows that c(x) has the required form. \(\square \)
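The bookkeeping in this counting argument can be sanity-checked: \(M = \sum _{\gamma \in AG_{2,+}} m_\gamma = 12m+3\) also equals \(\sum _{\gamma \in G_{2,+}} (m_\gamma + m_{2\gamma })\), and the number of orderings is the multinomial coefficient. A minimal sketch (the toy multiset at the end is ours, for illustration):

```python
# M = 3*m (alphas) + 3*3m (betas) + 3*1 (2betas) = 12m+3, and the same M as a
# sum of (m_gamma + m_{2gamma}) over the six positive roots of G2.
from math import factorial
from itertools import permutations

for m in range(1, 7):
    M = 3 * m + 3 * (3 * m) + 3 * 1
    assert M == 12 * m + 3
    assert sum([m] * 3 + [3 * m + 1] * 3) == M  # m_gamma + m_{2gamma} per gamma

# toy multiset with letter multiplicities (2, 1): 3!/(2!*1!) = 3 distinct words
words = set(permutations('aab'))
assert len(words) == factorial(3) // (factorial(2) * factorial(1)) == 3
```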

Remark 5.4

We expect bispectrality of the \(AG_2\) system to hold for any value of the parameter m; this should follow by adapting the arguments of [4] to non-integer parameters.

6 Another Dual Operator

In this Section, we present another difference operator for the configuration \(AG_2\) which preserves the quasi-invariants. We also give the corresponding second construction of the BA function.

We define a difference operator acting in the variable \(z \in {\mathbb {C}}^2\) of the form

$$\begin{aligned} {{{\mathcal {D}}}}_2 =\sum _{\tau :\frac{1}{2} \tau \in AG_2} a_\tau (z)(T_\tau -1). \end{aligned}$$
(6.1)

We now proceed to specify the functions \(a_\tau (z)\). Let \(\lambda _\tau = g_{\tau /2}\) where g is defined in terms of the multiplicity map of \(AG_2\) in accordance with the convention (1.4) for couplings. That is, we set

$$\begin{aligned} \lambda _\tau = \frac{1}{4} m_{\frac{1}{2} \tau }(m_{\frac{1}{2} \tau } + 2m_{\tau } +1)\tau ^2. \end{aligned}$$

That means \(\lambda _{4\varepsilon \beta _j} = 8\beta _j^2\), \(\lambda _{2\varepsilon \beta _j} = 9m(m+1) \beta _j^2\) and \(\lambda _{2\varepsilon \alpha _j} = m(m+1)\alpha _j^2\) (\(j=1,2,3\), \(\varepsilon \in \{ \pm 1\}\), \(m \in {\mathbb {N}}\)). For \(\tau = 2\varepsilon \alpha _j\) we define
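The three stated values follow from the displayed formula for \(\lambda _\tau \) by a direct substitution, which can be checked symbolically (the symbols beta2, alpha2 below stand for \(\beta _j^2\), \(\alpha _j^2\)):

```python
# Check lambda_tau = (1/4) m_{tau/2} (m_{tau/2} + 2 m_tau + 1) tau^2
# against the three stated values.
import sympy as sp

m, beta2, alpha2 = sp.symbols('m beta2 alpha2')

def lam(m_half, m_full, tau2):
    return sp.Rational(1, 4) * m_half * (m_half + 2 * m_full + 1) * tau2

# tau = 4*eps*beta_j: m_{2beta} = 1, m_{4beta} = 0, tau^2 = 16*beta^2
assert sp.simplify(lam(1, 0, 16 * beta2) - 8 * beta2) == 0
# tau = 2*eps*beta_j: m_beta = 3m, m_{2beta} = 1, tau^2 = 4*beta^2
assert sp.simplify(lam(3 * m, 1, 4 * beta2) - 9 * m * (m + 1) * beta2) == 0
# tau = 2*eps*alpha_j: m_alpha = m, m_{2alpha} = 0, tau^2 = 4*alpha^2
assert sp.simplify(lam(m, 0, 4 * alpha2) - m * (m + 1) * alpha2) == 0
```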

$$\begin{aligned} \begin{aligned} a_{2\varepsilon \alpha _j}(z)&= \lambda _{2\varepsilon \alpha _j} \prod _{\begin{array}{c} \gamma \in W\beta _1 \\ \langle 2\varepsilon \alpha _j, (2\gamma )^\vee \rangle = 3 \end{array}} \bigg (1-\frac{(3m+2)\gamma ^2}{\langle \gamma , z \rangle }\bigg )\\&\quad \times \bigg (1-\frac{(3m+1)\gamma ^2}{\langle \gamma , z \rangle +\gamma ^2}\bigg ) \bigg (1-\frac{3m\gamma ^2}{\langle \gamma , z \rangle +2\gamma ^2}\bigg ) \\&\quad \times \prod _{\begin{array}{c} \gamma \in W\beta _1 \\ \langle 2\varepsilon \alpha _j, (2\gamma )^\vee \rangle = 0 \end{array}} \bigg (1-\frac{6\gamma ^2}{\langle \gamma , z \rangle -\gamma ^2}\bigg ) \prod _{\begin{array}{c} \gamma \in W\alpha _1 \\ \langle 2\varepsilon \alpha _j, (2\gamma )^\vee \rangle = 1 \end{array}} \bigg (1-\frac{m\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \\&\quad \times \bigg (1-\frac{m\alpha _j^2}{\langle \varepsilon \alpha _j, z \rangle }\bigg ) \bigg (1-\frac{m\alpha _j^2}{\langle \varepsilon \alpha _j, z \rangle +\alpha _j^2}\bigg ). \end{aligned} \end{aligned}$$
(6.2)

For \(\tau = 4\varepsilon \beta _j\) we define

$$\begin{aligned} \begin{aligned} a_{4\varepsilon \beta _j}(z)&= \lambda _{4\varepsilon \beta _j} \prod _{\begin{array}{c} \gamma \in W\alpha _1 \\ \langle 4\varepsilon \beta _j, (2\gamma )^\vee \rangle = 2 \end{array}} \bigg (1-\frac{m\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \bigg (1-\frac{m\gamma ^2}{\langle \gamma , z \rangle +\gamma ^2}\bigg ) \\&\quad \times \prod _{\begin{array}{c} \gamma \in W\beta _1 \\ \langle 4\varepsilon \beta _j, (2\gamma )^\vee \rangle = 2 \end{array}} \bigg (1-\frac{(3m+2)\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \bigg (1-\frac{3m\gamma ^2}{\langle \gamma , z \rangle +\gamma ^2}\bigg ) \\&\quad \times \bigg (1-\frac{(3m+2)\beta _j^2}{\langle \varepsilon \beta _j, z \rangle }\bigg ) \bigg (1-\frac{3m\beta _j^2}{\langle \varepsilon \beta _j, z \rangle +\beta _j^2}\bigg )\\&\quad \times \bigg (1-\frac{(3m+2)\beta _j^2}{\langle \varepsilon \beta _j, z \rangle +2\beta _j^2}\bigg ) \bigg (1-\frac{3m\beta _j^2}{\langle \varepsilon \beta _j, z \rangle +3\beta _j^2}\bigg ). \end{aligned} \end{aligned}$$
(6.3)

For \(\tau = 2\varepsilon \beta _j\) we define

$$\begin{aligned}&a_{2\varepsilon \beta _j}(z) = \lambda _{2\varepsilon \beta _j} \prod _{\begin{array}{c} \gamma \in W\alpha _1 \\ \langle 2\varepsilon \beta _j, (2\gamma )^\vee \rangle = 0 \end{array}} \bigg (1-\dfrac{\frac{2}{3} \gamma ^2}{\langle \gamma , z \rangle -\gamma ^2}\bigg ) \prod _{\begin{array}{c} \gamma \in W\alpha _1 \\ \langle 2\varepsilon \beta _j, (2\gamma )^\vee \rangle = 1 \end{array}} \bigg (1-\dfrac{m\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \nonumber \\&\quad \times \prod _{\begin{array}{c} \gamma \in W\beta _1 \\ \langle 2\varepsilon \beta _j, (2\gamma )^\vee \rangle = 1 \end{array}} \bigg (1-\dfrac{(3m+2)\gamma ^2}{\langle \gamma , z \rangle }\bigg ) \bigg (1+\dfrac{3m\gamma ^2}{\langle \gamma , z \rangle +2\gamma ^2}\bigg ) \bigg (1-\dfrac{(3m-1)\gamma ^2}{\langle \gamma , z \rangle -\gamma ^2}\bigg )\nonumber \\&\quad \times \bigg (1-\dfrac{(3m+2)\beta _j^2}{\langle \varepsilon \beta _j, z \rangle }\bigg ) \bigg (1-\dfrac{3m\beta _j^2}{\langle \varepsilon \beta _j, z \rangle +\beta _j^2}\bigg )\nonumber \\&\qquad \times \bigg (1+\dfrac{4\beta _j^2}{\langle \varepsilon \beta _j, z \rangle +3\beta _j^2}\bigg ) \bigg (1-\dfrac{4\beta _j^2}{\langle \varepsilon \beta _j, z \rangle -\beta _j^2}\bigg ). \end{aligned}$$
(6.4)

The next Lemma shows that the functions \(a_\tau (z)\) have \(G_2\) symmetry.

Lemma 6.1

Let \(a_\tau (z)\) be defined as above. Then for all \(w \in W\), we have \(w a_\tau = a_{w\tau }\).

Proof

For any \(w \in W\), \(\lambda _{w\tau } = \lambda _\tau \) for any \(\tau \) such that \(\frac{1}{2} \tau \in AG_2\). The statement follows as in the proof of Lemma 4.1. \(\square \)

Theorem 6.2

The operator (6.1) preserves the ring \({{\mathcal {R}}}^a_{AG_2}\).

Proof

One can check that the operator satisfies condition \((D_{2})\). Let \(p(z) \in {{\mathcal {R}}}^a_{AG_2}\) be arbitrary. Without loss of generality, we put \(\omega = \sqrt{2}\). We introduce new coordinates (A, B) on \({\mathbb {C}}^2\) given by \(A = \langle \alpha _1, z \rangle \) and \(B = \langle \beta _1, z \rangle \).

It follows from the form of the coefficient functions (6.2), (6.3) and (6.4) and Theorem 3.2 that there are no singularities in \({{{\mathcal {D}}}}_2p(z)\) at \(B = c\) for \(c \ge 0\) unless \(B=2, 4, 6\). Let us consider these cases.

If \(B = 6\) (equivalently, \(\langle \beta _1, z \rangle = 3\beta _1^2\)), then \(\langle \beta _2, z \rangle = 3 + \frac{1}{2} A\), \(\langle \beta _3, z \rangle = -3 + \frac{1}{2} A\), \(\langle \alpha _2, z \rangle = -9 + \frac{1}{2} A\) and \(\langle \alpha _3, z \rangle = 9 + \frac{1}{2} A\). The only terms singular at \(B=6\) are \(a_{-4\beta _1}\) and \(a_{-2\beta _1}\). We note that \(s_{\beta _1}(-4\beta _1) - 6\beta _1 = -2\beta _1\), and we compute that \({{\,\textrm{res}\,}}_{B = 6}(a_{-4\beta _1}) = - {{\,\textrm{res}\,}}_{B = 6}(a_{-2\beta _1})\) equals

$$\begin{aligned} \frac{\begin{matrix}48 m (m+1) (3m+2) (3m+5) (A - 2- 12m) (A - 6- 12m) (A - 14- 12m)\\ \times (A - 18- 12m) (A+2+12 m) (A+6+12 m) (A + 14+12 m) (A+18+12 m)\end{matrix}}{(A - 18)(A - 6)^2(A - 2)(A +2)(A +6)^2(A + 18)}. \end{aligned}$$

Therefore, by Theorem 3.2 part 1, there is no singularity at \(B = 6\) in \({{{\mathcal {D}}}}_2p(z)\).

If \(B = 4\) (equivalently, \(\langle \beta _1, z \rangle = 2\beta _1^2\)), then \(\langle \beta _2, z \rangle = 2 + \frac{1}{2} A\), \(\langle \beta _3, z \rangle = -2 + \frac{1}{2} A\), \(\langle \alpha _2, z \rangle = -6 + \frac{1}{2} A\) and \(\langle \alpha _3, z \rangle = 6 + \frac{1}{2} A\). The only \(\tau \in 2AG_2\) for which \(a_\tau \) is singular at \(B = 4\) and for which the corresponding \(\lambda = s_{\beta _1}(\tau ) - 4\beta _1 \ne 0\) are \(\tau = -2\beta _2, -2\alpha _3, 2\beta _3\) and \(2\alpha _2\). We note that \(s_{\beta _1}(-2\beta _2) - 4\beta _1 = -2\alpha _3\), and we compute that \({{\,\textrm{res}\,}}_{B = 4}(a_{-2\beta _2}) = - {{\,\textrm{res}\,}}_{B = 4}(a_{-2\alpha _3})\) equals

$$\begin{aligned} -\frac{\begin{matrix} 18m^2(m+1) (3m+2) (3m+4) (A-32) (A + 24) (A-12-12 m)(A + 6m) \\ \times (A-4+12 m) (A+12m) (A+4+12 m) (A + 12 + 12m)^2\end{matrix}}{(A -12) (A-8) (A-4) A^4 (A + 4) (A + 12)}. \end{aligned}$$

Since \(s_{\alpha _1}(-2\beta _2) = 2\beta _3\) and \(s_{\alpha _1}(-2\alpha _3) = 2\alpha _2\), by Lemma 3.3 the residue of \(a_{2\beta _3} + a_{2\alpha _2}\) at \(B = 4\) is also zero. Thus, by Theorem 3.2 part 1, there is no singularity at \(B = 4\) in \({{{\mathcal {D}}}}_2p(z)\).

If \(B = 2\) (equivalently, \(\langle \beta _1, z \rangle = \beta _1^2\)), then \(\langle \beta _2, z \rangle = 1 + \frac{1}{2} A\), \(\langle \beta _3, z \rangle = -1 + \frac{1}{2} A\), \(\langle \alpha _2, z \rangle = -3 + \frac{1}{2} A\) and \(\langle \alpha _3, z \rangle = 3 + \frac{1}{2} A\). The only \(\tau \in 2AG_2\) for which \(a_\tau \) is singular at \(B = 2\) and for which the corresponding \(\lambda = s_{\beta _1}(\tau ) - 2\beta _1 \ne 0\) are \(\tau = -4\beta _1, 2\beta _1, -4\beta _2, 4\beta _3, \pm 2\alpha _1, 2\beta _2, 2\alpha _2, -2\beta _3\) and \(-2\alpha _3\). We note that \(s_{\beta _1}(-4\beta _1) - 2\beta _1 = 2\beta _1\), and we compute that \({{\,\textrm{res}\,}}_{B = 2}(a_{-4\beta _1}) = - {{\,\textrm{res}\,}}_{B = 2} (a_{2\beta _1})\) equals

$$\begin{aligned} \frac{\begin{matrix} 144 m(m+1) (3m-2) (3m+1) (A +6-12m) (A +2-12m) (A -6-12m) \\ \times (A-10-12m) (A-6+12 m) (A-2+12 m) (A+6+12 m) (A+10+12 m)\end{matrix}}{(A-6)^2(A-2)^2(A+2)^2(A+6)^2}. \end{aligned}$$

Similarly, we note that \(s_{\beta _1}(-4\beta _2) - 2\beta _1 = -2\alpha _1\), and we compute that \({{\,\textrm{res}\,}}_{B = 2}(a_{-4\beta _2}) = - {{\,\textrm{res}\,}}_{B = 2}(a_{-2\alpha _1})\) equals

$$\begin{aligned} \frac{\begin{matrix} 288 m (m+1) (A-6+6 m) (A+6 m) (A-10+12 m) (A-6+12m)^2\\ \times (A-2+12 m) (A+2+12 m) (A+6+12m)^2(A+10+12 m)\end{matrix}}{(A-10)(A-6)^4(A-2)^2A(A+2)(A+6)}. \end{aligned}$$

Since \(s_{\alpha _1}(-4\beta _2) = 4\beta _3\) and \(s_{\alpha _1}(-2\alpha _1) = 2\alpha _1\), it follows by Lemma 3.3 that the residue of \(a_{4\beta _3} + a_{2\alpha _1}\) at \(B=2\) is also zero. Next we note that \(s_{\beta _1}(2\beta _2) - 2\beta _1 = 2\alpha _2\), and we compute that \({{\,\textrm{res}\,}}_{B = 2}(a_{2\beta _2}) = - {{\,\textrm{res}\,}}_{B = 2}(a_{2\alpha _2})\) equals

$$\begin{aligned} \frac{\begin{matrix} 36 m (m+1)^2 (3m-1) (3m+1) (A-26) (A+30) (A-10-12 m) (A-6-12 m) \\ \times (A-2-12 m) (A+6-12m)^2 (A-6 m) (A+6+12 m)\end{matrix}}{(A - 6)(A-2)^2A(A+2)(A+6)^4}. \end{aligned}$$

Since \(s_{\alpha _1}(2\beta _2) = -2\beta _3\) and \(s_{\alpha _1}(2\alpha _2) = -2\alpha _3\), it follows by Lemma 3.3 that the residue of \(a_{ -2\beta _3} + a_{-2\alpha _3}\) at \(B=2\) is also zero. Thus, by Theorem 3.2 part 1 there is no singularity at \(B = 2\) in \({{{\mathcal {D}}}}_2p(z)\).

Let us now consider possible singularities in \({{{\mathcal {D}}}}_2p(z)\) at \(A=c \ge 0\). By Theorem 3.2 part 1 and the form of the coefficients (6.2)–(6.4) it is sufficient to consider the case \(A=6\) (equivalently, \(\langle \alpha _1, z \rangle = \alpha _1^2\)). In this case \(\langle \beta _2, z \rangle = \frac{1}{2} B + 3\), \(\langle \beta _3, z \rangle = -\frac{1}{2} B + 3\), \(\langle \alpha _2, z \rangle = -\frac{3}{2} B + 3\) and \(\langle \alpha _3, z \rangle = \frac{3}{2} B + 3\). The only \(\tau \in 2AG_2\) for which \(a_\tau \) is singular at \(A = 6\) and for which the corresponding \(\lambda = s_{\alpha _1}(\tau ) - 2\alpha _1 \ne 0\) are \(\tau = -4\beta _2, -4\beta _3\) and \(\pm 2\beta _1\). We note that \(s_{\alpha _1}(-4\beta _2) - 2\alpha _1 = -2\beta _1\), and we compute that \({{\,\textrm{res}\,}}_{A = 6}(a_{-4\beta _2}) = - {{\,\textrm{res}\,}}_{A = 6}(a_{-2\beta _1})\) equals

$$\begin{aligned} \frac{\begin{matrix}96 m(m+1) (B-14-12 m) (B-2-12 m) (B-2+4 m) (B+ 2+4 m) (B-2+6 m) \\ \times (B+4+6 m) (B-6+12 m) (B+2+12 m) (B+6+12 m) (B+14+12 m)\end{matrix}}{(B-6)^2(B - 2)^4 B (B+2)^2(B + 6)}. \end{aligned}$$

Since \(s_{\beta _1}(-4\beta _2) = -4\beta _3\) and \(s_{\beta _1}(-2\beta _1) = 2\beta _1\), it follows by Lemma 3.3 that the residue of \(a_{-4\beta _3} + a_{2\beta _1}\) at \(A = 6\) is also zero. By Theorem 3.2 part 1 there is thus no singularity at \(A = 6\) in \({{{\mathcal {D}}}}_2p(z)\).

By Corollary 3.4 it follows that \({{{\mathcal {D}}}}_2p(z)\) has no singularities. The proof that \({{{\mathcal {D}}}}_2p(z)\) belongs to \({{\mathcal {R}}}^a_{AG_2}\) can be completed analogously to the argument for the operator (4.2) in the proof of Theorem 4.2. \(\square \)

We now give a second construction of the Baker–Akhiezer function for \(AG_2\).

Theorem 6.3

Let \(M = \sum _{\gamma \in AG_{2, +}} m_\gamma = 12m+3.\) Let

$$\begin{aligned} \mu (x) = \sum _{\tau :\frac{1}{2} \tau \in AG_2} \lambda _\tau (\exp \langle x, \tau \rangle -1), \end{aligned}$$

where \(x \in {\mathbb {C}}^2\), and let

$$\begin{aligned} c(x) = \frac{M!}{8} \prod _{\gamma \in G_{2, +}} \left( \sum _{\tau :\frac{1}{2} \tau \in AG_2} \lambda _\tau \langle \tau , \gamma \rangle \exp {\langle x, \tau \rangle } \right) ^{m_\gamma + m_{2\gamma }}. \end{aligned}$$
(6.5)

Then the function

$$\begin{aligned} \psi (z, x) = c^{-1}(x) ({{{\mathcal {D}}}}_2 - \mu (x))^M [Q(z) \exp \langle z, x \rangle ], \end{aligned}$$
(6.6)

where the polynomial Q(z) is given by (5.4), is the BA function for \(R=AG_2\). Moreover, \(\psi \) is also an eigenfunction of the operator \({{{\mathcal {D}}}}_2\) with \({{{\mathcal {D}}}}_2 \psi = \mu (x) \psi \), so that bispectrality holds.

The proof is similar to the proof of Theorem 5.3 and can be found in [30].

Remark 6.4

One can show that the operators (4.2)–(4.4), (6.1)–(6.4) commute: \([{{{\mathcal {D}}}}_1, {{{\mathcal {D}}}}_2]=0\). This can be proved by taking the rational limit of the more general trigonometric versions of these operators, which also commute [14]. Another approach is to use the relation \([{{{\mathcal {D}}}}_1, {{{\mathcal {D}}}}_2]\psi =0\).

7 Relation with \(A_2\) and \(A_1\) Macdonald–Ruijsenaars Systems

When \(m=0\), the configuration \(AG_2\) reduces to the root system \(\{\pm 2\beta _i:i=1,2,3\}\) of type \(A_2\) with multiplicity 1 for all vectors. In this limit, the operator (6.1) reduces to the quasiminuscule operator for (twice) this root system. Let us now consider the \(m=0\) limit of the operator (4.2). After a rescaling, this gives an operator of the form

$$\begin{aligned} D_0 = \sum _{\tau \in G_2} a_{\tau , 0}(z) (T_\tau - 1) = -24 + \sum _{\tau \in G_2} a_{\tau , 0}(z) T_\tau , \end{aligned}$$
(7.1)

where for \(\tau = \varepsilon \beta _j\), \(\varepsilon \in \{ \pm 1\}\), \(j=1, 2, 3,\) we have

$$\begin{aligned} a_{\tau , 0}(z) = 3 \prod _{\begin{array}{c} \gamma \in W \beta _1 \\ \langle \tau , (2\gamma )^\vee \rangle = \frac{1}{2} \end{array}} \left( 1-\frac{\frac{1}{2} \gamma ^2}{\langle \gamma , z \rangle - \frac{1}{2}\gamma ^2} \right) \prod _{\begin{array}{c} \gamma \in W \beta _1 \\ \langle \tau , (2\gamma )^\vee \rangle = 1 \end{array}} \left( 1-\frac{\gamma ^2}{\langle \gamma , z \rangle }\right) , \end{aligned}$$

and for \(\tau = \varepsilon \alpha _j\) we have

$$\begin{aligned} a_{\tau , 0}(z) = \prod _{\begin{array}{c} \gamma \in W \beta _1 \\ \langle \tau , (2\gamma )^\vee \rangle = \frac{3}{2} \end{array}} \left( 1-\frac{\frac{3}{2} \gamma ^2}{\langle \gamma , z \rangle + \frac{1}{2}\gamma ^2} \right) . \end{aligned}$$

Proposition 7.1

The operator (7.1) preserves the ring of analytic functions p(z) satisfying \(p(z+\beta _i) = p(z-\beta _i)\) at \(\langle \beta _i, z \rangle = 0\) for all \(i=1, 2, 3\).

The proof is parallel to the proof of Theorem 4.2. In this case, though, condition 2(b) of Theorem 3.2 is needed, whereas it plays no role in the proofs of Theorems 4.2 and 6.2.

Let us rewrite the operator \(D_0\) for the more standard realisation of the root system \(A_2\) given by \(A_2 = \{ e_i - e_j:1 \le i \ne j \le 3 \} \subset {\mathbb {R}}^3\), where \(e_i\) are the standard basis vectors.

Proposition 7.2

Define the set \(S = S_1 \cup S_2\), where

$$\begin{aligned} S_1&= \{3e_i :i = 1, 2, 3 \} \cup \{2e_i + 2e_j - e_k: \{i, j, k\} = \{1, 2, 3\},\, i < j \} \text { and } \\ S_2&= \{ 2e_i + e_j:1 \le i \ne j\le 3\}. \end{aligned}$$

Then the operator acting in the variable \(z = (z_1, z_2, z_3)\in {\mathbb {C}}^3\) given by

$$\begin{aligned} \begin{aligned} {{{\widetilde{D}}}}_0&= 3\sum _{\tau \in S_2} \left( \prod _{\begin{array}{c} i \ne j \\ \langle \tau , e_i-e_j \rangle = 1 \end{array}} \left( 1-\frac{1}{z_i - z_j-1} \right) \prod _{\begin{array}{c} i \ne j \\ \langle \tau , e_i-e_j \rangle = 2 \end{array}} \left( 1-\frac{2}{z_i - z_j} \right) \right) T_\tau \\&+\sum _{\tau \in S_1}\left( \prod _{\begin{array}{c} i \ne j \\ \langle \tau , e_i - e_j \rangle = 3 \end{array}} \left( 1-\frac{3}{z_i - z_j+1} \right) \right) T_\tau \end{aligned} \end{aligned}$$
(7.2)

preserves the ring of analytic functions p(z) satisfying \(p(z+e_i - e_j) = p(z-e_i + e_j)\) at \(z_i=z_j\) for all \(i, j =1, 2, 3\).

Note that \(S_1 = \{\tau \in S:|\langle \tau , e_i - e_j \rangle | \in \{ 0, 3\} \text { for all } i, j=1, 2, 3\}\) and \(S_2 = \{\tau \in S:|\langle \tau , e_i - e_j \rangle | \in \{0, 1, 2\} \text { for all } i, j=1, 2, 3\}\).
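To illustrate the quasi-invariance condition of Proposition 7.2 (an example of ours, not from the text): every symmetric polynomial satisfies it, since at \(z_i = z_j\) the transposition of the coordinates \(z_i, z_j\) swaps the points \(z + e_i - e_j\) and \(z - e_i + e_j\). The condition is nonetheless nontrivial; for instance, with \(i=1\), \(j=2\) and \(z_1 = z_2 = t\),

$$\begin{aligned} p(z)&= z_1 z_2:&p(z + e_1 - e_2)&= (t+1)(t-1) = p(z - e_1 + e_2), \\ p(z)&= z_1:&p(z + e_1 - e_2)&= t + 1 \ne t - 1 = p(z - e_1 + e_2), \end{aligned}$$

so \(z_1 z_2\) satisfies the condition for this pair \(i, j\) while \(z_1\) does not.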

Let us now consider a version of the operator (7.2) for the root system \(A_1\). Let \(\sim \) denote equality of operators when acting on functions constant along the direction normal to the hyperplane \(z_1 + z_2 = 0\).

Proposition 7.3

Let \(S_1' = \{3e_1, 3e_2\}\) and \(S_2' = \{2e_1 + e_2, e_1 + 2e_2\}.\) Then formula (7.2) after replacement of \(S_i\) with \(S_i'\), \(i=1, 2\), gives an operator \({{{\widehat{D}}}}_0\) acting in the variable \(z = (z_1, z_2) \in {\mathbb {C}}^2\) that preserves the ring \({{\mathcal {R}}}^a_{A_1}\) of analytic functions p(z) satisfying \(p(z+e_1 - e_2) = p(z-e_1 + e_2)\) at \(z_1=z_2\). Moreover, if we split the operator \({{{\widehat{D}}}}_0 = D_1 + D_2\), where

$$\begin{aligned} D_1 = 3\left( 1 - \frac{1}{z_2 - z_1 - 1} \right) T_{e_1 + 2e_2} + \left( 1-\frac{3}{z_1 - z_2 +1} \right) T_{3e_1} \end{aligned}$$

and

$$\begin{aligned} D_2 = 3 \left( 1 - \frac{1}{z_1 - z_2 - 1} \right) T_{2e_1 + e_2} + \left( 1-\frac{3}{z_2 - z_1 + 1} \right) T_{3e_2}, \end{aligned}$$

then \(D_i({{\mathcal {R}}}^a_{A_1}) \subseteq {{\mathcal {R}}}^a_{A_1}\) for \(i=1, 2\). The operators \(D_i\) satisfy commutativity relations

$$\begin{aligned}{}[D_1, D_2] = [D_1, D^{msl}] = [D_2, D^{msl}] = 0, \end{aligned}$$

where \(D^{msl}\) is the operator for the minuscule weight \(2 e_1\) of the root system \(2 A_1\) with multiplicity 1 given by

$$\begin{aligned} D^{msl} = \left( 1 - \frac{2}{z_1 - z_2} \right) T_{2e_1} + \left( 1 - \frac{2}{z_2 - z_1} \right) T_{2e_2}. \end{aligned}$$

We also have \({{{{\widehat{D}}}}_0}^2 \sim (D^{msl} + 2)^3\), and \(D_1 D_2 \sim 3 D^{qm} + 16 \sim 3(D^{msl})^2 + 4\), where

$$\begin{aligned} D^{qm}&= \left( 1 - \frac{2}{z_1 - z_2} \right) \left( 1 - \frac{2}{z_1 - z_2+2} \right) (T_{4e_1}-1)\\&\quad + \left( 1 - \frac{2}{z_2 - z_1} \right) \left( 1 - \frac{2}{z_2 - z_1+2} \right) (T_{4e_2} - 1) \end{aligned}$$

is the operator for the quasiminuscule weight \(4e_1\).
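The last equivalence amounts to \((D^{msl})^2 \sim D^{qm} + 4\), which can be seen directly; we sketch the computation. Restricting to functions \(f(w)\) of \(w = z_1 - z_2\) (the setting of \(\sim \), where \(T_{2e_1}\), \(T_{2e_2}\) act as \(w \mapsto w \pm 2\)), one finds

$$\begin{aligned} (D^{msl})^2 f(w)&= \frac{w-2}{w+2}\, f(w+4) + \frac{w+2}{w-2}\, f(w-4) + \frac{2(w^2-12)}{w^2-4}\, f(w), \\ D^{qm} f(w)&= \frac{w-2}{w+2}\, \big (f(w+4) - f(w)\big ) + \frac{w+2}{w-2}\, \big (f(w-4) - f(w)\big ), \end{aligned}$$

and hence \(\big ((D^{msl})^2 - D^{qm}\big ) f(w) = \frac{2(w^2-12) + 2w^2 + 8}{w^2-4}\, f(w) = 4 f(w)\), so that \(3(D^{msl})^2 + 4 \sim 3 D^{qm} + 16\).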

We note that the operators \(D_1\) and \(D_2\) are not individually symmetric under the swap of the variables \(z_1, z_2\); the swap interchanges them. The operator \(D^{msl}\) is symmetric, and all three operators commute.