Abstract
The modular transformation behavior of theta series for indefinite quadratic forms is well understood in the case of elliptic modular forms due to Vignéras, who showed that solving a certain second-order differential equation serves as a criterion for modularity. In this paper, we give a generalization of this result to Siegel theta series.
1 Introduction
In the course of his work on the Minkowski–Hasse principle for quadratic forms over the rationals, Siegel introduced a natural generalization of elliptic modular forms to higher genus n [14]. Among those functions, nowadays called Siegel modular forms, Siegel theta series play a similarly important role as theta series do in the context of elliptic modular forms. In a recently published article by Dittmann et al. [5], a basis of the space of Siegel cusp forms of degree 6 and weight 14 is given by harmonic Siegel theta series. By considering one of these basis elements, the authors deduce that the Kodaira dimension of the Siegel modular variety \({\mathcal {A}}_6={\text {Sp}}_{12}({\mathbb {Z}})\setminus {\mathbb {H}}_6\) is non-negative.
In order to give more examples of Siegel theta series and make use of the connection to various topics, such as algebraic geometry and number theory, it is desirable to give a general framework for the description of holomorphic and non-holomorphic Siegel theta series analogous to what is already known for elliptic theta series owing to the work of Vignéras [16]. If theta series are built from functions that satisfy a certain second-order differential equation, the modularity of these series immediately follows. For the (generalized) error functions, which are employed in recent discussions of theta series for indefinite quadratic forms, this criterion is used to derive the modular transformation behavior of the emerging theta series. Namely, these are the results by Zwegers [18] for quadratic forms of signature \((m-1,1)\), by Alexandrov et al. [1] for signature \((m-2,2)\), and for arbitrary signature by Nazaroglu [13] and Westerholt-Raum [17], which are brought together by Kudla [9] and Funke and Kudla [7]. Even before that, Kudla and Millson [10, 11] considered a certain class of Schwartz functions to define modular forms in terms of theta functions and obtained holomorphic modular forms valued in the set of cohomology classes.
In most of these examples the criterion given by Vignéras plays an important role in order to deduce modularity, so the question arises whether a similar result holds for more general types of theta series. Vignéras herself derives the result of [16] in a second paper by considering the Weil representation and mentions that the result is expected to hold for Hilbert and Siegel theta series as well, see [15].
In the following, we prove this for the latter case by describing Siegel theta series for indefinite quadratic forms and deriving a generalization of Vignéras’ result for generic genus n. We adopt an elementary approach similar to the one in [16], which has the advantage that we explicitly construct a basis of suitable functions. This construction also recovers the known results for positive definite quadratic forms, as described by Freitag [6], for instance. In this case, these “suitable functions” are harmonic polynomials and one obtains holomorphic series. However, the Siegel theta series that are constructed in the present paper are in general non-holomorphic. In a sequel to this paper, we will investigate the special case where the quadratic form has signature \((m-1,1)\) and, by applying the result shown here, deduce the modularity of non-holomorphic Siegel theta series, which are related to holomorphic (non-modular) Siegel theta series.
We give a short overview of the main results. We use standard notational conventions, so \({\text {e}}(z):=\exp (2\pi iz)\), and multiplication takes precedence over division; for example, \(1/8\pi \) means \(1/(8\pi )\).
Definition 1.1
Throughout this paper, let \(A\in {{\mathbb {Z}}}^{{m}\times {m}}\) denote a non-degenerate symmetric matrix of signature (r, s).
Remark 1.2
Note that we do not generally assume that A is even. Also, in some sections we explicitly set \(s=0\) and thus employ properties of the then positive definite matrix A.
We construct modular forms on the Siegel upper half-space \({\mathbb {H}}_n=\lbrace Z=X+iY\in {\mathbb {C}}^{n\times n}\mid Z=Z^{\mathsf {T}},\,Y \text { positive definite}\rbrace \)
in the form of Siegel theta series. We denote by \({\mathcal {S}}({\mathbb {R}}^{m\times n})\) the space of Schwartz functions on \({\mathbb {R}}^{m\times n}\) and then choose \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) such that \(f(U)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}AU)\bigr )\in {\mathcal {S}}({\mathbb {R}}^{m\times n}).\)
This ensures the absolute convergence of the theta series that we define in the following.
Definition 1.3
Let \(H,K\in {\mathbb {R}}^{m\times n}\) and let \(\lambda \in {\mathbb {Z}}\). The theta series with characteristics H and K associated with f and A is \(\vartheta _{{H},{K},f,A}(Z):=\det Y^{-\lambda /2}\sum _{U\in H+{\mathbb {Z}}^{m\times n}}f\bigl (UY^{1/2}\bigr )\,{\text {e}}\Bigl ({\text {tr}}\bigl (\tfrac{1}{2}\,U^{\mathsf {T}}AUZ\bigr )+{\text {tr}}\bigl (U^{\mathsf {T}}AK\bigr )\Bigr ).\)
Remark 1.4
We drop the parameters f and A from the index when a transformation of \(\vartheta _{{H},{K}}\) leaves them invariant. In the following, it becomes clear that the choice of \(\lambda \) depends on f, so we do not include it as an additional parameter in the definition.
For a positive definite matrix A, we consider polynomials \(P:{\mathbb {C}}^{m\times n}\longrightarrow {\mathbb {C}}\) that satisfy \(P(UN)=\det N^\alpha P(U)\) for all \(N\in {\mathbb {C}}^{n\times n}\) and a fixed \(\alpha \in {\mathbb {N}}_0\). These polynomials form a complex vector space, which we denote by \({\mathcal {P}}_{\alpha }^{m,n}\). For the modified polynomial \(p:=\exp \bigl (-{\text {tr}}{\Delta }_A/8\pi \bigr )P\)
and when we take A to be even and set \(\lambda =\alpha \), the theta series \(\vartheta _{\mathrm {O},\mathrm {O},p,A}\) transforms like a Siegel modular form of weight \(m/2+\alpha \) on a congruence subgroup of \(\varGamma _n\) and with some character, where both depend on the level of A. If \(P\in {\mathcal {P}}_{\alpha }^{m,n}\) is annihilated by the Laplacian \({\text {tr}}{\Delta }_A\), we obtain the holomorphic theta series considered by Freitag [6].
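For illustration (the specific choices below are ours, not taken from the paper), the passage from P to the modified polynomial \(p=\exp (-{\text {tr}}{\Delta }_A/8\pi )P\) can be checked symbolically in the smallest interesting case \(m=2\), \(n=1\), \(A=I_2\): the exponential series terminates on polynomials, and the result satisfies a Vignéras-type equation with eigenvalue \(\alpha \).

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2')
pi = sp.pi
alpha = 2
P = u1**2  # element of P_2^{2,1}: homogeneous of degree 2

# Laplace operator associated with A = I_2 (so A^{-1} = I_2)
def lap(h):
    return sp.diff(h, u1, 2) + sp.diff(h, u2, 2)

# p = exp(-tr(Delta_A)/(8*pi)) P; the exponential series terminates on polynomials
p = P - lap(P)/(8*pi)

# Euler operator E = u1*d/du1 + u2*d/du2
def E(h):
    return u1*sp.diff(h, u1) + u2*sp.diff(h, u2)

# Vignéras-type equation: (E - Delta_A/(4*pi)) p = alpha * p
lhs = sp.expand(E(p) - lap(p)/(4*pi))
assert sp.expand(lhs - alpha*p) == 0
```

Here \(p=u_1^2-1/4\pi \), which is no longer homogeneous but is an eigenfunction of the combined Euler–Laplace operator.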
When A denotes an indefinite quadratic form of signature (r, s), we write \(A=A^{+}+A^{-}\) with a positive semi-definite matrix \(A^{+}\) and a negative semi-definite matrix \(A^{-}\) and denote by \(M=A^{+}-A^{-}\) the positive definite majorant matrix of A (see Remark 2.3). We consider the function \(g(U):=\Bigl (\exp \bigl (-{\text {tr}}{\Delta }_M/8\pi \bigr )P\Bigr )(U)\cdot \exp \bigl (2\pi {\text {tr}}\bigl (U^{\mathsf {T}}A^{-}U\bigr )\bigr ),\)
assuming that \(P\in {\mathcal {P}}_{\alpha +\beta }^{m,n}\) factorizes as \(P(U)=P_\alpha (U^+)\cdot P_\beta (U^-)\) with \(P_\alpha \in {\mathcal {P}}_{\alpha }^{m,n},P_\beta \in {\mathcal {P}}_{\beta }^{m,n}\) and \(U=U^++U^-\), where \(U^+\) denotes the part of U that belongs to the subspace on which A is positive semi-definite, i. e. \({\text {tr}}\bigl ((U^+)^{\mathsf {T}}A U^+\bigr )={\text {tr}}\bigl (U^{\mathsf {T}}A^{+}U \bigr )\) and similarly \({\text {tr}}\bigl ((U^-)^{\mathsf {T}}A U^-\bigr )={\text {tr}}\bigl (U^{\mathsf {T}}A^{-}U \bigr )\). For this choice of g and considering an even matrix A and setting \(\lambda =\alpha -\beta -s\), the theta series \(\vartheta _{\mathrm {O},\mathrm {O},g,A}\) transforms like a non-holomorphic Siegel modular form of weight \(m/2+\lambda \) on a congruence subgroup of \(\varGamma _n\) and with some character, where both depend on the level of A.
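As a sanity check of the weight shift \(\lambda =\alpha -\beta -s\) (an illustration with our own choices, assuming the entrywise operators \(E=\sum _\mu u_\mu \partial /\partial u_\mu \) and \(\Delta _A=\sum _{\mu ,\nu }(A^{-1})_{\mu \nu }\partial ^2/\partial u_\mu \partial u_\nu \) of Sect. 3 for \(n=1\)): take \(m=2\), \(A={\text {diag}}(1,-1)\) of signature (1, 1) and \(P=1\), so \(\alpha =\beta =0\) and g reduces to a Gaussian in the negative-definite direction; sympy confirms the eigenvalue \(\lambda =-1\).

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2')
pi = sp.pi

# A = diag(1, -1): signature (r, s) = (1, 1); take P = 1, i.e. alpha = beta = 0
f = sp.exp(-2*pi*u2**2)  # Gaussian in the negative-definite direction
# note: f * exp(-pi*(u1^2 - u2^2)) = exp(-pi*(u1^2 + u2^2)) is a Schwartz function

def E(h):
    return u1*sp.diff(h, u1) + u2*sp.diff(h, u2)

def lap_A(h):  # Delta_A for A = diag(1, -1), where A^{-1} = A
    return sp.diff(h, u1, 2) - sp.diff(h, u2, 2)

lam = 0 - 0 - 1  # lambda = alpha - beta - s = -1
assert sp.simplify(E(f) - lap_A(f)/(4*pi) - lam*f) == 0
```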
These explicit constructions not only give examples of Siegel modular forms: by applying Vignéras’ result for genus \(n=1\), we show that a criterion similar to the one in [16] determines whether a Siegel theta series transforms like a modular form:
Theorem 1.5
Let \(\lambda \in {\mathbb {Z}}\) and let \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) such that \(f(U)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}AU)\bigr )\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\)
and f is a solution of the \(n\times n\) system of partial differential equations \(\Bigl (\mathbf{E }-\frac{1}{4\pi }{\Delta }_A\Bigr )f=\lambda \cdot I\cdot f,\) where \(\mathbf{E }\) and \({\Delta }_A\) denote the generalized Euler and Laplace operators from Definition 3.1.
For \(H=K=\mathrm {O}\) and A even, the theta series \(\vartheta _{H,K,f,A}\) in Definition 1.3 transforms like a Siegel modular form of genus n and weight \(m/2+\lambda \), where the level and character depend on A.
Remark 1.6
In this paper, we determine the transformation behavior of \(\vartheta _{H,K,f,A}\) with respect to the transformations \(Z\mapsto Z+S\) for a symmetric matrix \(S\in {\mathbb {Z}}^{n\times n}\) (see Lemma 4.2) and \(Z\mapsto -Z^{-1}\) (see Proposition 4.10). The results hold for any \(H,K\in {\mathbb {R}}^{m\times n}\) and we do not generally assume that A is even. By setting further preconditions for H, K and A, one can then construct vector-valued Siegel modular forms of genus n and weight \(m/2+\lambda \) on the full Siegel modular group or scalar-valued modular forms on congruence subgroups, see also Remark 2.2. However, we will not explicitly elaborate on that here.
The outline of the paper is as follows: In Sect. 2, we briefly summarize the most important notions about Siegel modular forms that are relevant for this paper. In the next section, we examine the complex vector space formed by the solutions of the \(n\times n\) system of partial differential equations from Theorem 1.5. Under the additional assumption that a solution f must satisfy the growth condition \(f(U)\exp (-\pi {\text {tr}}(U^{\mathsf {T}}AU))\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\), we explicitly determine a basis (which is finite if A is positive or negative definite and infinite otherwise) of this vector space. In Sect. 4, we show that these basis elements can be used to construct theta series of genus n that transform like Siegel modular forms of weight \(m/2+\lambda \). In order to do so, we first construct non-holomorphic theta series for positive definite quadratic forms. With some modifications, this can be generalized to theta series associated with indefinite quadratic forms.
2 Notation and preliminaries
We fix notation, summarize standard results about Siegel modular forms, and refer to Andrianov [2, p. 1–25] and Freitag [6] for further details. For convenience, we restate some definitions from the previous section. We also comment on results by Borcherds [4] and Vignéras [15] and point out differences to our set-up.
Denote the Siegel upper half-space by \({\mathbb {H}}_n:=\lbrace Z=X+iY\in {\mathbb {C}}^{n\times n}\mid Z=Z^{\mathsf {T}},\,Y \text { positive definite}\rbrace .\)
We let \(Y^{1/2}\) denote the uniquely determined symmetric positive definite matrix that satisfies \(Y^{1/2}\cdot Y^{1/2}=Y\). The same holds for the square root of A, when A is a positive definite matrix.
We define modular forms on \({\mathbb {H}}_n\) for the full Siegel modular group \(\varGamma _n:={\text {Sp}}_{2n}({\mathbb {Z}})=\Bigl \lbrace M=\bigl (\begin{array}{cc} A&{}B\\ C&{}D \end{array}\bigr )\in {{\mathbb {Z}}}^{{2n}\times {2n}}\ \Big \vert \ M^{\mathsf {T}}\bigl (\begin{array}{cc} \mathrm {O}&{}I_n\\ -I_n&{}\mathrm {O} \end{array}\bigr )M=\bigl (\begin{array}{cc} \mathrm {O}&{}I_n\\ -I_n&{}\mathrm {O} \end{array}\bigr )\Bigr \rbrace ,\)
which operates on \({\mathbb {H}}_n\) by \(M\langle Z\rangle :=(AZ+B)(CZ+D)^{-1}\) for \(M=\bigl (\begin{array}{cc} A&{}B\\ C&{}D \end{array}\bigr )\in \varGamma _n.\)
The imaginary part Y of Z and the imaginary part \({\widetilde{Y}}\) of \(M\langle Z\rangle \) satisfy the relation \({\widetilde{Y}}=\bigl ((C{\overline{Z}}+D)^{\mathsf {T}}\bigr )^{-1}\,Y\,(CZ+D)^{-1}.\)
In particular, \({\widetilde{Y}}\) is positive definite and symmetric.
Definition 2.1
We call \(F:{\mathbb {H}}_n\longrightarrow {\mathbb {C}}\) a (classical) Siegel modular form of genus n and weight k if the following conditions hold:
(a) The function F is holomorphic on \({\mathbb {H}}_n\),

(b) for every \(M\in \varGamma _n\) we have \(F(M\langle Z\rangle )=\det (CZ+D)^{k}F(Z)\),

(c) |F(Z)| is bounded on domains in \({\mathbb {H}}_n\) of the form \({\mathbb {H}}^{\varepsilon }_n:=\{X+iY\in {\mathbb {H}}_n\mid Y\ge \varepsilon \cdot I\}\) with \(\varepsilon >0\).
Note that the weight is not necessarily an integer. In this context, we define, as usual, for \(z\in {\mathbb {C}}\) and any non-integer exponent r that \(z^r:=\exp (r\log z)\), where \(\log z=\log |z|+i\arg (z)\) with \(-\pi <\arg (z)\le \pi \).
Due to the Koecher principle (cf. [6, p. 44f.]), which holds for \(n>1\), all functions satisfying (a) and (b) admit a Fourier expansion over positive semi-definite even symmetric matrices and are in particular bounded on \({\mathbb {H}}^{\varepsilon }_n\) for any \(\varepsilon >0\). So we do not need to impose an analogue of (c) as condition. If we consider non-holomorphic modular forms, the Koecher principle does not necessarily hold anymore. In our case, we build Siegel theta series by using Schwartz functions and obtain absolutely convergent series, so these functions also satisfy condition (c).
Remark 2.2
The full Siegel modular group \(\varGamma _n\) is generated by the matrices \( \bigl (\begin{array}{cc} I_n&{}S\\ \mathrm {O}&{}I_n \end{array}\bigr )\) with \(S=S^{\mathsf {T}}\) and \(\bigl (\begin{array}{cc} \mathrm {O}&{}-I_n\\ I_n&{}\mathrm {O} \end{array}\bigr )\) (cf. [6, p. 322-328]), so any function F with \(F(Z+S)=F(Z)\) for symmetric matrices \(S \in {{\mathbb {Z}}}^{{n}\times {n}}\) and \(F(-Z^{-1})=\det Z^k F(Z)\) satisfies condition (b). For the theta series with characteristics H, K that we construct here, we observe the following: Up to a factor depending on H, A and S, we can write \(\vartheta _{{H},{K}}(Z+S)\) as a theta series of the same form but with a slightly changed characteristic \(H,{\widetilde{K}}\), see Lemma 4.2. We can express \(\vartheta _{{H},{K}}(-Z^{-1})\) as a linear combination of theta series \(\vartheta _{{J+K},{-H}}(Z)\), where \(J\in A^{-1}{\mathbb {Z}}^{m\times n}{\text {mod}} {\mathbb {Z}}^{m\times n}\), see Proposition 4.10.
If A is an even unimodular matrix and \(H=K=\mathrm {O}\), the theta series transforms like a modular form on the full group \(\varGamma _n\), see Example 4.8 when A is positive definite and Example 4.11 when A is indefinite (in the last case we might obtain a character of \(\varGamma _n\) as an additional automorphic factor).
If H and K are rational matrices, we can take the series \(\vartheta _{{H},{K}}\) as entries of vector-valued functions, which then define modular forms on the full Siegel modular group. In another approach (see for example Andrianov and Maloletkin [3]), one could consider suitable congruence subgroups of finite index in \(\varGamma _n\).
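The two generating transformations can be observed numerically for genus \(n=1\) on the classical Jacobi theta function \(\vartheta (z)=\sum _{n\in {\mathbb {Z}}}{\text {e}}(n^2z/2)\), the simplest prototype of the series considered here (this check is an illustration, not part of the paper's argument): \(\vartheta (z+2)=\vartheta (z)\) and \(\vartheta (-1/z)=\sqrt{-iz}\,\vartheta (z)\), a weight-1/2 transformation.

```python
import cmath

def theta(z, N=60):
    # classical Jacobi theta function: sum over n in Z of exp(pi*i*n^2*z)
    return sum(cmath.exp(cmath.pi * 1j * n * n * z) for n in range(-N, N + 1))

z = 0.3 + 0.8j
# inversion: theta(-1/z) = sqrt(-i*z) * theta(z)  (principal branch)
lhs = theta(-1 / z)
rhs = cmath.sqrt(-1j * z) * theta(z)
assert abs(lhs - rhs) < 1e-10
# translation: theta(z + 2) = theta(z)
assert abs(theta(z + 2) - theta(z)) < 1e-12
```

The truncation at \(|n|\le 60\) is far beyond double precision for any z with \(\mathrm {Im}(z)\) of moderate size, so the assertions test the identities, not the cutoff.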
In Sect. 3 as well as Sect. 4, we will consider a fixed decomposition of the non-degenerate matrix A of signature (r, s), so we give a precise description here.
Remark 2.3
Let \(\mathbf {v_{1}},\ldots ,\mathbf {v_{r}}\) denote the eigenvectors that correspond to the positive eigenvalues of A and \(\mathbf {v_{r+1}},\ldots ,\mathbf {v_{m}}\) the ones that correspond to the negative eigenvalues. We normalize these eigenvectors in a suitable way so that for \(S=(\mathbf {v_{1}},\ldots ,\mathbf {v_{m}})\in {{\mathbb {R}}}^{{m}\times {m}}\) we have \(S^{\mathsf {T}}AS={\mathcal {I}}:=\bigl (\begin{array}{cc} I_r&{}\mathrm {O}\\ \mathrm {O}&{}-I_s \end{array}\bigr ).\)
As \(\lbrace \mathbf {v_{1}},\ldots , \mathbf {v_{m}}\rbrace \) forms a basis of \({\mathbb {R}}^m\), we write any vector \(\mathbf {u}\in {\mathbb {R}}^m\) as \(\mathbf {u}=\sum _{i=1}^r \lambda _i \mathbf {v_{i}}+\sum _{i=r+1}^m \lambda _i \mathbf {v_{i}}\) and define \(\mathbf {u^+}:=\sum _{i=1}^r \lambda _i \mathbf {v_{i}}\) and \(\mathbf {u^-}:=\sum _{i=r+1}^m \lambda _i \mathbf {v_{i}}\).
So for the inverse of S, we have \(A=(S^{-1})^{\mathsf {T}}{\mathcal {I}}S^{-1}\). This enables us to write A as the sum of the positive semi-definite respectively negative semi-definite matrices \(A^{+}:=(S^{-1})^{\mathsf {T}}\bigl (\begin{array}{cc} I_r&{}\mathrm {O}\\ \mathrm {O}&{}\mathrm {O} \end{array}\bigr )S^{-1}\quad \text {and}\quad A^{-}:=(S^{-1})^{\mathsf {T}}\bigl (\begin{array}{cc} \mathrm {O}&{}\mathrm {O}\\ \mathrm {O}&{}-I_s \end{array}\bigr )S^{-1}.\)
We also associate the positive definite matrix \(M:=(S^{-1})^{\mathsf {T}}S^{-1}=A^{+}-A^{-}\). If we write \(U\in {\mathbb {R}}^{m\times n}\) as \(U=U^++U^-\), where \(U^+:=(\mathbf {u^+_{1}},\ldots ,\mathbf {u^+_{n}})\) and \(U^-:=(\mathbf {u^-_{1}},\ldots ,\mathbf {u^-_{n}})\), it is straightforward to check that \({\text {tr}}\bigl ((U^{+})^{\mathsf {T}}AU^{+}\bigr )={\text {tr}}\bigl (U^{\mathsf {T}}A^{+}U\bigr )\quad \text {and}\quad {\text {tr}}\bigl ((U^{-})^{\mathsf {T}}AU^{-}\bigr )={\text {tr}}\bigl (U^{\mathsf {T}}A^{-}U\bigr ).\)
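The normalization of Remark 2.3 can be carried out numerically; the sketch below (with A an antidiagonal matrix of signature (1, 1), our choice for illustration) computes S with \(S^{\mathsf {T}}AS={\mathcal {I}}\), the parts \(A^{\pm }\), and the majorant \(M=A^{+}-A^{-}\).

```python
import numpy as np

A = np.array([[0., 1.], [1., 0.]])  # signature (r, s) = (1, 1)

w, V = np.linalg.eigh(A)            # eigenvalues in ascending order, orthonormal eigenvectors
S = V / np.sqrt(np.abs(w))          # rescale each column by 1/sqrt(|eigenvalue|)
order = np.argsort(-w)              # positive eigenvalues first
S = S[:, order]
I_rs = np.diag(np.sign(w[order]))   # the matrix "script I" = diag(I_r, -I_s)
assert np.allclose(S.T @ A @ S, I_rs)

Sinv = np.linalg.inv(S)
A_plus = Sinv.T @ np.diag([1., 0.]) @ Sinv    # positive semi-definite part
A_minus = Sinv.T @ np.diag([0., -1.]) @ Sinv  # negative semi-definite part
M = A_plus - A_minus                          # positive definite majorant

assert np.allclose(A_plus + A_minus, A)
assert np.all(np.linalg.eigvalsh(M) > 0)
```

For this A one obtains \(A^{\pm }=\pm \frac{1}{2}\bigl (\begin{array}{cc} 1&{}\pm 1\\ \pm 1&{}1 \end{array}\bigr )\) and \(M=I_2\).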
As our construction of Siegel theta series in Sect. 4 is very similar to Borcherds’ set-up [4] for \(n=1\), we briefly recall his result and point out the main differences.
Remark 2.4
Borcherds considers a non-degenerate quadratic form Q with signature (r, s), an even lattice \(L\subset {\mathbb {R}}^m\) with the associated dual lattice \(L'\) and an isometry v mapping \(L\otimes {\mathbb {R}}\) to \({\mathbb {R}}^{r,s}\). Considering the inverse images \(v^+\) and \(v^-\) of \({\mathbb {R}}^{r,0}\) and \({\mathbb {R}}^{0,s}\) under v, one decomposes \(L\otimes {\mathbb {R}}\) in the orthogonal direct sum of a positive definite subspace \(v^+\) and a negative definite subspace \(v^-\). For the projection of \({\lambda } \in L\otimes {\mathbb {R}}\) into \(v^{\pm }\) one writes \({\lambda }_{v^{\pm }}\) and obtains the positive definite quadratic form \(Q_v({\lambda })=Q({\lambda }_{v^+})-Q({\lambda }_{v^-})\). As the decomposition into the subspaces \(v^+\) and \(v^-\) is not unique, Borcherds’ theta series include an additional parameter to indicate the choice of \(v^+\in G(M)\), where the Grassmannian G(M) denotes the set of positive definite r-dimensional subspaces of \(L\otimes {\mathbb {R}}\). For \(z \in {\mathbb {H}}_1,\mathbf {h},\mathbf {k}\in L\otimes {\mathbb {R}},{\gamma }\in L'/L\), \(\Delta \) the Laplacian on \({\mathbb {R}}^m\), and \(p:{\mathbb {R}}^m\longrightarrow {\mathbb {R}}\) a polynomial that is homogeneous of degree \(\alpha \) in the first r variables and homogeneous of degree \(\beta \) in the last s variables, he defines
and shows that this is a non-holomorphic modular form of weight \((r/2+\alpha ,s/2+\beta )\).
In the present paper, we fix the decomposition \(A=A^{+}+A^{-}\) and the majorant matrix \(M=A^{+}- A^{-}\) by taking the eigenvectors of A as a basis in \({\mathbb {R}}^m\). Then \(U\in {\mathbb {R}}^{m\times n}\) is projected onto \(U^+\) in the positive definite subspace and \(U^-\) in the negative definite subspace. However, choosing any other decomposition of A into a negative and a positive definite part leads to an analogous construction.
In Definition 1.3, we wrote \(\vartheta _{{H},{K}}\) in a form that makes the analogy with Vignéras’ construction (see Remark 2.5) visible. We can also write these theta series as
which resembles Borcherds’ construction. Note that we can multiply the series by \(\det Y^{s/2+\beta }\) to obtain the weight \(m/2+\lambda \) (where \(\lambda =\alpha -\beta -s\)) instead of \((r/2+\alpha ,s/2+\beta )\).
We conclude this section by reviewing Vignéras’ construction [16] and addressing essential differences.
Remark 2.5
Vignéras considers theta series of genus 1 of the form \(\vartheta _{{\mathbf {h}},{\mathbf {k}}}(z)=y^{-\lambda /2}\sum _{\mathbf {u}\in L+\mathbf {h}}f\bigl (\mathbf {u}\,y^{1/2}\bigr )\,{\text {e}}\bigl (Q(\mathbf {u})z+\mathbf {u}^{\mathsf {T}}A\mathbf {k}\bigr ),\)
where \(L\subset {\mathbb {R}}^m\) denotes a lattice, \(Q(\mathbf {u})=\frac{1}{2}\mathbf {u}^{\mathsf {T}}A \mathbf {u}\) a quadratic form of signature (r, s) and \(z=x+iy\) an element of the upper half-plane \({\mathbb {H}}_1\). The following two requirements are imposed on the function f: Set \({\widetilde{f}}(\mathbf {u})=f(\mathbf {u})\exp \bigl (-2\pi Q(\mathbf {u})\bigr )\). Then for any polynomial \(p:{\mathbb {R}}^{m}\longrightarrow {\mathbb {R}}\) with \(\deg (p)\le 2\) and any partial derivative \(\partial ^{\alpha }\) with \(|\alpha |\le 2\), \(p\cdot \partial ^{\alpha }{\widetilde{f}}\in {\mathcal {L}}^2({\mathbb {R}}^{m})\cap {\mathcal {L}}^1({\mathbb {R}}^{m}).\)
Furthermore, f satisfies the differential equation of second order \(\Bigl (E-\frac{1}{4\pi }\Delta _A\Bigr )f=\lambda \cdot f,\quad \text {where}\quad E=\sum _{\mu =1}^{m}u_{\mu }\frac{\partial }{\partial u_{\mu }}\quad \text {and}\quad \Delta _A=\sum _{\mu ,\nu =1}^{m}(A^{-1})_{\mu \nu }\frac{\partial ^2}{\partial u_{\mu }\,\partial u_{\nu }}.\)
Then \(\vartheta _{{\mathbf {0}},{\mathbf {0}}}\) transforms like a modular form of weight \(m/2+\lambda \).
For higher genus \(n\in {\mathbb {N}}\), we introduce some notation to formulate an analogous growth condition. For \(p\in [1,\infty )\) let \({\mathcal {L}}^p({\mathbb {R}}^{m\times n})\) denote the Lebesgue space of functions \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {C}}\) for which \(\Vert f\Vert _p:=\Bigl (\int _{{\mathbb {R}}^{m\times n}}|f(U)|^p\,\mathrm {d}U\Bigr )^{1/p}\)
is finite. We use the usual multi-index notation on \({\mathbb {R}}^{m\times n}\), where \(\alpha \in {\mathbb {N}}_0^{m\times n}\) with \(|\alpha |=\sum _{i=1}^{m}\sum _{j=1}^{n} \alpha _{ij}\), so \(\partial ^{\alpha }:=\prod _{i=1}^{m}\prod _{j=1}^{n}\Bigl (\frac{\partial }{\partial U_{ij}}\Bigr )^{\alpha _{ij}}.\)
For \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\), one sets \({\widetilde{f}} (U):=f(U)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}A U)\bigr )\) and, analogously to Vignéras, assumes that for any polynomial \(p:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) with \(\deg (p)\le 2\) and any partial derivative \(\partial ^{\alpha }\) with \(|\alpha |\le 2\), \(p\cdot \partial ^{\alpha }{\widetilde{f}}\in {\mathcal {L}}^2({\mathbb {R}}^{m\times n})\cap {\mathcal {L}}^1({\mathbb {R}}^{m\times n}). \qquad \mathrm {(2.2)}\)
This allows us to apply Vignéras’ result for theta series of genus 1 (as we make use of the fact that Hermite functions form an orthogonal basis of \({\mathcal {L}}^2\)-functions) and the Poisson summation formula.
However, for simplification, we replace assumption (2.2) by the more restrictive assumption that \({\widetilde{f}}\) is a Schwartz function.
3 A generalization of Vignéras’ differential equation
To derive an analogue of Vignéras’ result for Siegel modular forms of higher genus \(n\in {\mathbb {N}}\), we introduce matrix-valued operators generalizing E and \(\Delta _A\).
Definition 3.1
For \(U\in {\mathbb {R}}^{m\times n}\) let \(\partial /\partial U=\bigl (\partial /\partial U_{\mu \nu }\bigr )_{1\le \mu \le m,1\le \nu \le n}\). We define the generalized Euler operator \(\mathbf{E }:=U^{\mathsf {T}}\,\frac{\partial }{\partial U},\quad \text {i.\,e.}\quad \mathbf{E }_{ij}=\sum _{\mu =1}^{m}U_{\mu i}\frac{\partial }{\partial U_{\mu j}},\)
and the generalized Laplace operator associated with A \({\Delta }_A:=\Bigl (\frac{\partial }{\partial U}\Bigr )^{\mathsf {T}}A^{-1}\,\frac{\partial }{\partial U},\quad \text {i.\,e.}\quad \bigl ({\Delta }_A\bigr )_{ij}=\sum _{\mu ,\nu =1}^{m}(A^{-1})_{\mu \nu }\frac{\partial ^2}{\partial U_{\mu i}\,\partial U_{\nu j}}.\)
For the normalized Laplacian \({\Delta }_I\) we simply write \({\Delta }\).
The \(n\times n\) system of partial differential equations \(\Bigl (\mathbf{E }-\frac{1}{4\pi }{\Delta }_A\Bigr )f=\lambda \cdot I\cdot f \qquad \mathrm {(3.1)}\)
is a direct generalization of the set-up in [16]. In this section, we examine the complex vector space formed by the solutions f of (3.1) that additionally satisfy the growth condition \(f(U)\exp (-\pi {\text {tr}}(U^{\mathsf {T}}AU))\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\). We explicitly determine a basis (which is finite if A is positive or negative definite and infinite otherwise) of this vector space.
3.1 Functions with a homogeneity property
As mentioned in the introduction, we employ polynomials with a certain homogeneity property to construct Siegel theta series. In the following, we introduce the complex vector space of all functions with this homogeneity property. For a differentiable function f, we show in Proposition 3.4 that f is homogeneous of degree \(\alpha \) if and only if f solves the system of partial differential equations \(\mathbf{E }f=\alpha \cdot I\cdot f\). Further, we show in Lemma 3.5 that for a polynomial function p it is already sufficient that \(\mathbf{E }p=C\cdot p\) holds for some \(C\in {\mathbb {C}}^{n\times n}\) to deduce that p is a homogeneous function.
Definition 3.2
For \(\alpha \in {\mathbb {N}}_0\), \(m,n\in {\mathbb {N}}\), we define the complex vector space \({\mathcal {F}}_{\alpha }^{m, n}:=\bigl \lbrace f:{\mathbb {C}}^{m\times n}\longrightarrow {\mathbb {C}}\mid f(UN)=\det N^{\alpha }f(U)\ \text {for all}\ N\in {\mathbb {C}}^{n\times n}\bigr \rbrace .\)
For \(n=1\), this is the usual definition of a homogeneous function of non-negative degree. As a subspace, we consider all polynomials of this class, which is the space \({\mathcal {P}}_{\alpha }^{m,n}\) from the introduction.
Remark 3.3
The vector space \({\mathcal {P}}_{\alpha }^{m,n}\) is described by Maass [12]. He determines the structure of \({\mathcal {P}}_{\alpha }^{m,n}\), shows that it has finite dimension and even gives an explicit formula for the dimension. In the following, \({\mathcal {B}}_{\alpha }^{m,n}\) denotes a finite basis of \({\mathcal {P}}_{\alpha }^{m,n}\). We state some observations to show that we obtain non-trivial examples.
-
For \(m<n\), we have \({\mathcal {F}}_{\alpha }^{m, n}= {\mathbb {C}}\): We take \(U\in {\mathbb {R}}^{m\times n}\) such that \(f(U)\ne 0\). One can multiply by elementary matrices from the right such that U is in reduced column echelon form. If U has fewer rows than columns, at least the last column is a zero column. Setting \(N={\text {diag}}(1,\ldots ,1,\lambda )\) with \(\lambda \notin \lbrace 0,1\rbrace \) leads to the identity \(f(U)=\lambda ^\alpha f(U)\), which is only satisfied for \(\alpha =0\). The orbit of the right action of invertible matrices on \(U\in {\mathbb {C}}^{m\times n}\) is dense and f is continuous, so f is a constant function.
-
Note that \(f\cdot g\in {\mathcal {F}}_{\alpha +\beta }^{m, n}\) for \(f\in {\mathcal {F}}_{\alpha }^{m, n},g\in {\mathcal {F}}_{\beta }^{m, n}\) and \(f+ g\in {\mathcal {F}}_{\alpha }^{m, n}\) for \(f,g\in {\mathcal {F}}_{\alpha }^{m, n}\).
-
For \(m\ge n\) let \({\widetilde{U}}\in {{\mathbb {C}}}^{{n}\times {n}}\) be a square submatrix of maximal size of \(U\in {\mathbb {C}}^{m\times n}\). Clearly, we have \((\det {\widetilde{U}})^{\alpha }\in {\mathcal {P}}_{\alpha }^{m,n}\). Combining this with the previous point, we obtain all functions in \({\mathcal {P}}_{\alpha }^{m,n}\) by taking products of \(\alpha \) (possibly different) \(n\times n\)-minors \(\det {\widetilde{U}}\) and linear combinations thereof.
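The last point can be tested symbolically (a sketch with our own choice of minors): for \(m=3\), \(n=2\), the product of two \(2\times 2\)-minors satisfies \(P(UN)=\det N^{2}P(U)\), so it lies in \({\mathcal {P}}_{2}^{3,2}\).

```python
import sympy as sp

m, n = 3, 2
U = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'u{i}{j}'))
N = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'n{i}{j}'))

def minor(V, rows):
    # determinant of the n x n submatrix built from the given rows
    return V[rows, :].det()

def P(V):
    # product of two (possibly different) n x n minors of V
    return minor(V, [0, 1]) * minor(V, [1, 2])

alpha = 2
# homogeneity under right multiplication: P(U*N) = det(N)^alpha * P(U)
assert sp.expand(P(U*N) - N.det()**alpha * P(U)) == 0
```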
Homogeneous functions that are also differentiable are characterized by the identity \(Ef=\alpha \cdot f\). We observe that this statement can be generalized.
Proposition 3.4
Let \(f: {\mathbb {C}}^{m\times n}\longrightarrow {\mathbb {C}}\) be a differentiable function. We have \(f\in {\mathcal {F}}_{\alpha }^{m, n}\) if and only if \(\mathbf{E }f=\alpha \cdot I\cdot f\).
Proof
For \(U\in {\mathbb {C}}^{m\times n}\) and \(N\in {\mathbb {C}}^{n\times n}\), the derivative of the entry \((UN)_{k\ell }=\sum \limits _{\nu =1}^n U_{k\nu }N_{\nu \ell }\) with \(1\le k\le m,\,1\le \ell \le n\) is
Therefore,
Hence, we obtain for the derivative of f(UN) with respect to N that
and by the definition of the generalized Euler operator \(\mathbf{E }\) with respect to U it is
The adjugate matrix \({\text {adj}}(N)\in {{\mathbb {C}}}^{{n}\times {n}}\) is defined as \(\bigl ({\text {adj}}(N)\bigr )_{ij}:=(-1)^{i+j} \det {{\widetilde{N}}}_{ji},\) where \({{\widetilde{N}}}_{ji}\) denotes the \((n-1)\times (n-1)\)-matrix obtained by deleting the j-th row and i-th column. Laplace expansion of the determinant gives
Hence, the derivative of the determinant of N is the transpose of the adjugate matrix:
For \(f\in {\mathcal {F}}_{\alpha }^{m, n}\) the identity \(f(UN)=\det N^\alpha f(U)\) holds for all \(N\in {{\mathbb {C}}}^{{n}\times {n}}\). From equations (3.2) and (3.3) it follows
since \({\text {adj}}(N) N=\det N\cdot I\). We set \(N=I\) and obtain the identity \(\mathbf{E }f=\alpha \cdot I\cdot f\).
To show the other implication, notice that \(f(UN)(\det N)^{-\alpha }\) is constant with respect to N if f satisfies \(\mathbf{E }f=\alpha \cdot I\cdot f\): Using the identities (3.2) and (3.3), we obtain
Thus, we have \(f(UN) \det N^{-\alpha }=C(U),\) where C is independent of N. For \(N=I\), this is f(U), and hence we conclude \(f(UN)=\det N^\alpha f(U)\). \(\square \)
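Proposition 3.4 can be checked on a concrete polynomial (an illustration, with the entrywise convention \(\mathbf{E }_{ij}=\sum _{\mu }U_{\mu i}\,\partial /\partial U_{\mu j}\) assumed throughout these sketches): for \(f=(\det U)^2\in {\mathcal {F}}_{2}^{2, 2}\) one indeed finds \(\mathbf{E }f=2\cdot I\cdot f\).

```python
import sympy as sp

U = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f'u{i}{j}'))
f = U.det()**2  # f(UN) = det(N)^2 f(U), so f lies in F_2^{2,2} and alpha = 2

def euler(h, i, j):
    # entrywise Euler operator: (E h)_{ij} = sum_mu U[mu, i] * dh/dU[mu, j]
    return sum(U[mu, i]*sp.diff(h, U[mu, j]) for mu in range(2))

E_f = sp.Matrix(2, 2, lambda i, j: euler(f, i, j))
residual = (E_f - 2*f*sp.eye(2)).expand()
assert residual == sp.zeros(2, 2)
```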
Later we will only consider polynomial solutions. In this case, we can state the following lemma, which can be left aside for now, but will be used in the proof of Proposition 3.12.
Lemma 3.5
Let \(p:{\mathbb {C}}^{m\times n}\longrightarrow {\mathbb {C}}\) be a polynomial that solves the system of partial differential equations \(\mathbf{E }p=C\cdot p \qquad \mathrm {(3.4)}\) for some matrix \(C\in {\mathbb {C}}^{n\times n}\).
If p is not the zero function, the matrix C has the form \(C=\alpha \cdot I\) for some \(\alpha \in {\mathbb {N}}_0\).
Proof
First, we examine the case \(m=n=2\) and write \(U=\left( \begin{array}{ll} a&{}b\\ c&{}d \end{array}\right) \) and
By assumption, p satisfies the \(2\times 2\)-system of partial differential equations
Considering the upper left equation, we have
thus \(\alpha +\gamma =C_{11}\). Analogously, we deduce by the bottom right equation that \(\beta +\delta =C_{22}\) holds. As p is a polynomial, \(C_{11}\) and \(C_{22}\) denote non-negative integers. We write \(C_{11}=k\) and \(C_{22}=\ell \) from now on. We have shown that p is homogeneous (in the original sense) of degree k in the variables of the first column a, c and homogeneous of degree \(\ell \) in the variables of the last column b, d. It is easy to see that \(C_{12}=C_{21}=0\) holds: By assumption, the polynomial p satisfies the upper right equation
As the left-hand side is a polynomial, homogeneous of degree \(k+1\) in a, c and of degree \(\ell -1\) in b, d, and the right-hand side is a multiple of p, i. e. homogeneous of degree k and \(\ell \), we deduce that \(C_{12}\) must equal zero. Analogously, we conclude by the bottom left equation of (3.5) that \(C_{21}=0\).
It remains to be shown that \(k=\ell \) holds. We write
where \(p_{\alpha ,\gamma }\) denote homogeneous polynomials in b, d of degree \(\ell \). Then equation (3.6) with \(C_{12}=0\) has the form
We obtain by comparison of the coefficients of \(a^\nu c^\mu ,\, 0\le \nu \le k+1,\,\mu =k+1-\nu \):
Thus, we recursively determine the structure of \(p_{\alpha ,\gamma }\) to be \(p_{\alpha ,\gamma }(b,d)=\sum _{r=0}^\alpha e_rb^{\ell -r}d^r\) with \(e_r\in {\mathbb {R}}\). In particular, we see that the exponent of d does not exceed \(\alpha \), i. e. \(\delta \le \alpha \).
We make use of the symmetric structure of the polynomial p and exchange a and c and also b and d in the equations above. Then we obtain \(\beta \le \gamma \). By interchanging a and d along with their exponents \(\alpha \) and \(\delta \) as well as b and c along with their exponents \(\beta \) and \(\gamma \) and using the bottom left equation of (3.5), we obtain \(\alpha \le \delta \) and \(\gamma \le \beta \). We have shown \(\alpha =\delta \) and \(\gamma =\beta \), and in particular, \(k=\ell \) holds.
For generic \(m,n\in {\mathbb {N}}\), we reduce the \(n\times n\)-system \(\mathbf{E }p=C\cdot p\) to the case \(m=n=2\). We write \(U=(\mathbf {u_{1}},\ldots ,\mathbf {u_{n}})\) with \(\mathbf {u_{i}}\in {\mathbb {C}}^m\) and choose \(N\in {\mathbb {C}}^{n\times n}\) such that the i-th column of U is substituted by \(a\mathbf {u_{i}}+c\mathbf {u_{j}}\) and the j-th column by \(b\mathbf {u_{i}}+d\mathbf {u_{j}}\), where we assume that \(i<j\), i. e. we have
A simple calculation yields
As p solves (3.4) by assumption, we have
For \(2\times 2\)-systems of this form we have shown above that \(C_{ii}=C_{jj}=\alpha \) for some \(\alpha \in {\mathbb {N}}_0\) and \(C_{ij}=C_{ji}=0\). As we can choose any \(i,j\in \lbrace 1,\ldots , n\rbrace \) with \(i< j\), we deduce the claim. \(\square \)
3.2 Description of theta series with modular transformation behavior by partial differential equations
In this section, we show the connection between the functions with the homogeneity property that was described in the last section and the functions that are employed in Sect. 4 to construct modular Siegel theta series. Moreover, we apply Vignéras’ result for \(n=1\) to explicitly give a basis for the vector space of solutions of (3.1) under the additional growth condition.
First, we state a lemma that holds for any symmetric non-degenerate matrix A of signature (r, s). Namely, we compute the commutator of the k-th power of the Laplacian \(({\text {tr}}{\Delta }_A)^k\) (we will drop the brackets and write \({\text {tr}}{\Delta }_A^k\) for simplicity) and the Euler operator.
Lemma 3.6
The commutator of \(\mathbf{E }_{ij}\,(1\le i \le n,\,1\le j\le n)\) and \({\text {tr}}{\Delta }_A^k\) is \({\text {tr}}{\Delta }_A^k\,\mathbf{E }_{ij}-\mathbf{E }_{ij}\,{\text {tr}}{\Delta }_A^k=2k\cdot \bigl ({\Delta }_A\bigr )_{ij}{\text {tr}}{\Delta }_A^{k-1}.\)
Proof
We show the claim by induction on k. For \(k=1\) one calculates the commutator of \({\text {tr}}{\Delta }_A\) and \(\mathbf{E }_{ij}\). By definition we have
which we can write, denoting by \(\delta _{ij}\) the Kronecker delta, as
Since \(A^{-1}\) is symmetric, this is \(\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A + 2 \cdot ({\Delta }_A)_{ij}\). The operators \(({\Delta }_A)_{ij}\) and \({\text {tr}}{\Delta }_A\) commute; thus we deduce for \(k\mapsto k+1\)
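The base case \(k=1\), i. e. \({\text {tr}}{\Delta }_A\,\mathbf{E }_{ij}=\mathbf{E }_{ij}\,{\text {tr}}{\Delta }_A+2\bigl ({\Delta }_A\bigr )_{ij}\), can be verified on a test polynomial (our choices of A and f, with the entrywise operator conventions of the previous sketches):

```python
import sympy as sp

m, n = 2, 2
U = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'u{i}{j}'))
A = sp.Matrix([[2, 1], [1, 2]])
Ainv = A.inv()

def lap(h, i, j):
    # (Delta_A h)_{ij} = sum_{mu,nu} (A^{-1})_{mu nu} * d^2 h / (dU_{mu i} dU_{nu j})
    return sp.expand(sum(Ainv[mu, nu]*sp.diff(h, U[mu, i], U[nu, j])
                         for mu in range(m) for nu in range(m)))

def tr_lap(h):
    return sum(lap(h, ell, ell) for ell in range(n))

def euler(h, i, j):
    return sum(U[mu, i]*sp.diff(h, U[mu, j]) for mu in range(m))

# commutator [tr(Delta_A), E_{ij}] applied to a test polynomial
f = (U[0, 0]**2 + U[1, 1]**2)**2 + U[0, 1]**3 * U[1, 0]
i, j = 0, 1
comm = sp.expand(tr_lap(euler(f, i, j)) - euler(tr_lap(f), i, j))
assert sp.expand(comm - 2*lap(f, i, j)) == 0
```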
By applying the previous lemma and Proposition 3.4, we can now conclude that all solutions of (3.1) arise from functions that have the homogeneity property of degree \(\lambda \).
Lemma 3.7
Let \(f,g:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) denote functions for which \(\exp (c_1{\text {tr}}{\Delta }_A )f\) and \(\exp ( c_2{\text {tr}}{\Delta }_A)g\) are well-defined for any \(c_1,c_2\in {\mathbb {R}}\) (we apply this result to polynomials f and g in the following, hence these conditions make sense). Moreover, we assume that f and g are related by \(f=\exp (-{\text {tr}}{\Delta }_A/8\pi )g\). Then f is a solution of (3.1) if and only if g satisfies \(\mathbf{E }g=\lambda \cdot I\cdot g\), i. e. \(g\in {\mathcal {F}}_{\lambda }^{m, n}\).
Proof
We set \(c:=-1/8\pi \) to shorten notation. For
we consider the entry (i, j) for \(1\le i \le n,1\le j\le n\) of the system of partial differential equations (3.1):
Due to Lemma 3.6, we have
and therefore obtain
If \(\mathbf{E }_{ij}g=\lambda \cdot \delta _{ij}\cdot g\) holds, the right-hand side equals \(\lambda \cdot \delta _{ij}\cdot f\). As we have \(g=\exp (-c{\text {tr}}{\Delta }_A)f\) (see Property 4.3), we deduce that \(\exp (c{\text {tr}}{\Delta }_A)(\mathbf{E }_{ij}g)=\lambda \cdot \delta _{ij}\cdot f\) implies \(\mathbf{E }_{ij}g=\lambda \cdot \delta _{ij}\cdot g\). By Proposition 3.4, this is equivalent to \(g\in {\mathcal {F}}_{\lambda }^{m, n}\). \(\square \)
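A concrete instance of the lemma (an illustration with our own choices): for \(m=n=2\), \(A=I_2\) and \(g=(\det U)^2\in {\mathcal {F}}_{2}^{2, 2}\), the function \(f=\exp (-{\text {tr}}{\Delta }_A/8\pi )g\) solves the \(n\times n\) system with \(\lambda =2\).

```python
import sympy as sp

U = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f'u{i}{j}'))
pi = sp.pi
g = U.det()**2  # g(UN) = det(N)^2 g(U), i.e. g lies in F_2^{2,2}, lambda = 2

def lap(h, i, j):  # (Delta_A h)_{ij} for A = I_2
    return sum(sp.diff(h, U[mu, i], U[mu, j]) for mu in range(2))

def tr_lap(h):
    return sum(lap(h, ell, ell) for ell in range(2))

def euler(h, i, j):
    return sum(U[mu, i]*sp.diff(h, U[mu, j]) for mu in range(2))

# f = exp(-tr(Delta_A)/(8*pi)) g; the exponential series terminates on polynomials
f = g - tr_lap(g)/(8*pi) + tr_lap(tr_lap(g))/(2*(8*pi)**2)

# f solves the n x n system (E - Delta_A/(4*pi)) f = lambda * I * f with lambda = 2
residual = sp.Matrix(2, 2, lambda i, j:
                     sp.expand(euler(f, i, j) - lap(f, i, j)/(4*pi)
                               - (2*f if i == j else 0)))
assert residual == sp.zeros(2, 2)
```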
In the next proposition, we consider (3.1) for positive definite matrices A, namely the system of partial differential equations \(\Bigl (\mathbf{E }-\frac{1}{4\pi }{\Delta }_A\Bigr )f=\alpha \cdot I\cdot f. \qquad \mathrm {(3.7)}\)
We determine a finite basis of all solutions of (3.7) by additionally imposing a certain growth condition. Together with Proposition 4.7, where it is shown that theta series associated with these functions transform like modular forms, we obtain Theorem 1.5 for positive definite matrices A. In the proof, we employ Vignéras’ result [16] and the fact that we can explicitly construct a finite basis \({\mathcal {B}}_{\alpha }^{m,n}\) of \({\mathcal {P}}_{\alpha }^{m,n}\) due to Maass’ result [12].
Proposition 3.8
Let \({\mathcal {B}}_{\alpha }^{m,n}\) denote a finite basis of \({\mathcal {P}}_{\alpha }^{m,n}\) and let \(A\in {\mathbb {Z}}^{m\times m}\) denote a positive definite symmetric matrix. Every solution \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) of (3.7) that additionally satisfies the growth condition \({\widetilde{f}}(U):=f(U)\exp \bigl ( -\pi {\text {tr}}(U^{\mathsf {T}}A U) \bigr )\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\) is a polynomial. Moreover, a finite basis of this space of solutions is given by
Proof
We give a short review of Vignéras’ reasoning and apply it to functions with matrix variables. We identify \({\mathbb {R}}^{m\times n}\) with \({\mathbb {R}}^{m n}\) by writing \({\mathbb {R}}^{m\times n}\ni U=(\mathbf {u_{1}},\ldots ,\mathbf {u_{n}}),\) \(\mathbf {u_{i}}\in {\mathbb {R}}^m,\) as column vector
If f satisfies the system of differential equations (3.7) for \(c=-1/8\pi \), it follows in particular that
holds. We have
which are the usual Euler operator on \({\mathbb {R}}^{m n}\) and the Laplacian associated with the positive definite \(mn\times mn\) matrix that consists of \(m\times m\) blocks, all zero except for n copies of A on the diagonal. We express U in a suitable basis, so that the quadratic form becomes \((S^{-1})^{\mathsf {T}}A S^{-1}=I\), and expand \({\widetilde{f}}\) in an orthogonal basis of Hermite functions \(H_{\mathbf {k}}\) in mn variables as
where the Hermite functions in several variables are defined in terms of Hermite functions in one dimension:
Since f is a solution of (3.8), a basis of all functions \({\widetilde{f}}\) is determined by the finite set of Hermite functions \(H_{\mathbf {k}}\) with \(|\mathbf {k}|=\sum _{\mu =1}^{m}\sum _{\nu =1}^{n} k_{\mu \nu }=\alpha n\) (this is Vignéras’ argument, see [16]), where we can rewrite \(H_{\mathbf {k}}(U)=p_{\mathbf {k}}(U)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}AU)\bigr )\) with the Hermite polynomial \(p_{\mathbf {k}}\). So \(f(U)={\widetilde{f}}(U)\exp \bigl (\pi {\text {tr}}(U^{\mathsf {T}}A U)\bigr )\) can be expanded in terms of finitely many orthogonal Hermite polynomials \(p_{\mathbf {k}}\) and thus is a polynomial itself.
Thus, \(g:=\exp \bigl ({\text {tr}}{\Delta }_A/8\pi \bigr )f\) is a polynomial that satisfies \(\mathbf{E }g=\alpha \cdot I\cdot g\) by Lemma 3.7. We can choose any basis \({\mathcal {B}}_{\alpha }^{m,n}\) of \({\mathcal {P}}_{\alpha }^{m,n}\) to describe these homogeneous polynomials. Hence, we also obtain a basis of the solutions of (3.7). As \({\mathcal {P}}_{\alpha }^{m,n}\) is a finite-dimensional vector space, the basis \({\mathcal {B}}_{\alpha }^{m,n}\) is finite. \(\square \)
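As an illustration of the class \({\mathcal {P}}_{\alpha }^{m,n}\) (under the assumed convention \(\mathbf{E }_{ij}=\sum _{\mu } u_{\mu i}\,\partial /\partial u_{\mu j}\)): for \(m=n=2\), the polynomial \(P(U)=\det (U)^{\alpha }\) satisfies \(P(UN)=\det (N)^{\alpha }P(U)\) and hence \(\mathbf{E }_{ij}P=\alpha \cdot \delta _{ij}\cdot P\), which can be checked symbolically:

```python
import sympy as sp

# Hedged illustration (assumed convention): the Euler operators
# E_ij = sum_mu u_{mu i} d/du_{mu j}, and membership in P_alpha^{m,n}
# characterized by E_ij P = alpha * delta_ij * P.  For m = n = 2 the
# polynomial P(U) = det(U)^alpha is an example of such a polynomial.
U = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f'u{i}{j}'))
alpha = 2
P = U.det()**alpha

def euler(f, i, j):
    # E_ij f = sum over mu of u_{mu i} * df/du_{mu j}
    return sp.expand(sum(U[mu, i] * sp.diff(f, U[mu, j]) for mu in range(2)))

# E_ij P - alpha * delta_ij * P should vanish for all i, j
checks = [sp.expand(euler(P, i, j) - alpha * (1 if i == j else 0) * P)
          for i in range(2) for j in range(2)]
```

The off-diagonal operators annihilate P, while the diagonal ones rescale it by \(\alpha \), matching the defining property used above.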
Now we let A denote an indefinite matrix of signature (r, s) again. When we consider the associated system of partial differential equations (3.1), the solutions, which we describe in Proposition 3.12, can be traced back to functions that are defined on \(U^+\) and \(U^-\) respectively, where \(U^+\) denotes the projection of U onto the subspace on which A is positive definite and \(U^-\) the projection onto the subspace on which A is negative definite. So we first consider \({\mathcal {I}}=\bigl (\begin{array}{cc} I_r&\mathrm {O}\\ \mathrm {O}&-I_s\end{array}\bigr )\) instead of A and the corresponding system of partial differential equations
which can easily be split up into one part that depends on the first r rows of U and another part depending on the last s rows of U. We write \(U_r\) and \(U_s\) for these projections of U. Here, we have \(M=I\) and thus \({\Delta }_M={\Delta }\), and we show that a basis of all solutions is given by the functions
where P splits as \(P(U)=P_r(U_r)\cdot P_s(U_s)\) with \(P_r\in {\mathcal {B}}_{\alpha }^{m,n}\subset {\mathcal {P}}_{\alpha }^{m,n}\) and \(P_s\in {\mathcal {B}}_{\beta }^{m,n}\subset {\mathcal {P}}_{\beta }^{m,n}\) with \((\alpha ,\beta )\in {\mathbb {N}}_0^2\) such that \(\alpha -\beta =\lambda +s\).
Lemma 3.9
If one applies the Laplacian \({\Delta }_{\mathcal {I}}\) and the Euler operator on a product of functions \(g,h:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\), then the following rules hold:
We omit the proof, as the claim follows by a straightforward calculation. The part of (3.10) that depends on the subspace of \({\mathbb {R}}^{m\times n}\) on which the quadratic form is negative definite satisfies a slightly different system of partial differential equations than the one given in Lemma 3.7, as an additional exponential factor occurs.
Lemma 3.10
Let \({\mathcal {B}}_{\beta }^{m,n}\) denote a basis of \({\mathcal {P}}_{\beta }^{m,n}\). We consider the system of partial differential equations
A finite basis of all solutions of (3.11) that additionally satisfy the growth condition \(f(U)\exp (\pi {\text {tr}}(U^{\mathsf {T}}U))\in {\mathcal {S}}({\mathbb {R}}^{m \times n})\) is given by the functions
Proof
We define \(g(U):=\exp (-2\pi {\text {tr}}(U^{\mathsf {T}}U))\) and \(h_P(U):=\exp (-{\text {tr}}{\Delta }/8\pi )(P(U))\). Both functions satisfy systems of partial differential equations similar to (3.11): we check that
and
hold. Hence, we have
Due to Proposition 3.8 for \(A=I\), the identity
holds if and only if \(P\in {\mathcal {P}}_{\beta }^{m,n}\). Using the multiplication rules from Lemma 3.9, and applying (3.12) and (3.13) in the calculation of \(\Delta f_{P}=\Delta (g \cdot h_{P})\), we obtain
where we use in the second step that \(\mathbf{E }^{\mathsf {T}}h_P=\mathbf{E }h_P\) holds, since \(h_P\) satisfies (3.13) and the Laplacian is symmetric.
Analogously, one can show that for any solution f of the system (3.11) of partial differential equations, the function \(h(U)=f(U)\exp \bigl ( 2\pi {\text {tr}}(U^{\mathsf {T}}U)\bigr )\) satisfies \({\mathcal {D}}_I h = \beta \cdot I \cdot h\). Since
by assumption, we can apply Proposition 3.8, which states that we can describe a finite basis for all functions h by \(h_P=\exp (-{\text {tr}}{\Delta }/8\pi )(P(U))\) with \(P\in {\mathcal {B}}_{\beta }^{m,n}\). Thus, the functions \(f_P\) form a finite basis of the solutions of (3.11) that satisfy the aforementioned growth condition. \(\square \)
In the next lemma, we show that the substitution of U by \(S^{-1}U\) leads to the desired system of partial differential equations that is associated with A.
Lemma 3.11
Let \(S\in {\mathbb {R}}^{m\times m}\) such that \(A=(S^{-1})^{\mathsf {T}}{\mathcal {I}}S^{-1}\) and consider the functions \(f,f[S^{-1}]:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\), where \(f[S^{-1}](U)=f(S^{-1}U)\). The function f satisfies (3.9) if and only if \(f[S^{-1}]\) satisfies (3.1).
Proof
Let \(i,j\in \{1,\ldots ,n\}\). It suffices to calculate
and
to deduce the claim. \(\square \)
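The computation behind Lemma 3.11 can be verified symbolically for a concrete choice of S (again assuming the convention \({\Delta }_M=\sum _{\mu ,\nu }(M^{-1})_{\mu \nu }\,\partial ^2/\partial u_{\mu }\partial u_{\nu }\)): with \(A=(S^{-1})^{\mathsf {T}}{\mathcal {I}}S^{-1}\), applying \({\Delta }_A\) to \(f[S^{-1}]\) agrees with substituting \(S^{-1}U\) into \({\Delta }_{\mathcal {I}}f\).

```python
import sympy as sp

# Hedged check of the substitution step, n = 1, m = 2, with an
# illustrative S.  We assume Delta_M = sum (M^{-1})_{mu nu} d^2/du_mu du_nu.
x, y = sp.symbols('x y')
Ical = sp.diag(1, -1)                  # signature (1, 1)
S = sp.Matrix([[2, 1], [0, 1]])        # arbitrary invertible example
Sinv = S.inv()
A = Sinv.T * Ical * Sinv               # A = (S^{-1})^T I_{r,s} S^{-1}

def laplacian(M, f, v):
    Minv = M.inv()
    return sp.expand(sum(Minv[i, j] * sp.diff(f, v[i], v[j])
                         for i in range(2) for j in range(2)))

f = x**3 * y + x * y**2                # arbitrary test polynomial
w = Sinv * sp.Matrix([x, y])           # the substitution U -> S^{-1} U
f_sub = f.subs({x: w[0], y: w[1]}, simultaneous=True)

lhs = laplacian(A, sp.expand(f_sub), [x, y])           # Delta_A (f[S^{-1}])
rhs = laplacian(Ical, f, [x, y]).subs({x: w[0], y: w[1]},
                                      simultaneous=True)  # (Delta_I f)[S^{-1}]
```

Both sides coincide identically, which is exactly the chain-rule identity \({\Delta }_A\,f[S^{-1}]=({\Delta }_{\mathcal {I}}f)[S^{-1}]\) underlying the lemma.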
Proposition 3.12
Let \({\mathcal {B}}_{\alpha }^{m,n}\) denote a basis of \({\mathcal {P}}_{\alpha }^{m,n}\) and let \(A\in {\mathbb {Z}}^{m\times m}\) denote a non-degenerate symmetric matrix of signature (r, s). As in Remark 2.3, we write A as the sum of a positive semi-definite matrix \(A^{+}\) and a negative semi-definite matrix \(A^{-}\) and define \(M:=A^{+}-A^{-}\). The functions
where \(P\in {\mathcal {P}}_{\alpha +\beta }^{m,n}\) is given as the product \(P(U)=P_r(U^+)\cdot P_s(U^-)\) with \(P_r\in {\mathcal {B}}_{\alpha }^{m,n}\subset {\mathcal {P}}_{\alpha }^{m,n}\) and \(P_s\in {\mathcal {B}}_{\beta }^{m,n}\subset {\mathcal {P}}_{\beta }^{m,n}\) for \((\alpha ,\beta )\in {\mathbb {N}}_0^2\) such that \(\alpha -\beta =\lambda +s\), form a (possibly infinite) basis for the space of solutions \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) of (3.1) that additionally satisfy the growth condition
Proof
We consider the case \(A={\mathcal {I}}\). First we take \((\alpha ,\beta )\in {\mathbb {N}}_0^2\) with \(\alpha -\beta =\lambda +s\) to be fixed and show that \(f=f_{\alpha ,\beta }\) solves (3.9). As the eigenvectors of A form the canonical basis of \({\mathbb {R}}^m\), the polynomial P splits as \(P(U)=P_r(U_r)\cdot P_s(U_s)\), where \(U_r\in {\mathbb {R}}^{r\times n}\) consists of the first r rows of U and \(U_s\in {\mathbb {R}}^{s\times n}\) of the last s rows of U. The exponential part of f has the form \(\exp \bigl (-2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr ),\) so we can write \(f:=f_r\cdot f_s\), where \(f_r\) denotes the part dependent on \(U_r\) and \(f_s\) the part dependent on \(U_s\). By Lemma 3.9, we have
The expression
simplifies to
since
These relations also show that we can write \({\Delta }_{\mathcal {I}}f_r={\Delta }_{I_r}f_r\) and \({\Delta }_{\mathcal {I}}f_s=-{\Delta }_{I_s}f_s\). Then we consider the system of partial differential equations depending on the first r rows of U, where \(f_r\) corresponds to the function f from Lemma 3.7. Independently of that, we consider the part depending on the last s rows of U and apply Lemma 3.10 to \(f_s\). Putting these results together, we obtain
where \(\alpha -\beta -s=\lambda \).
To show that these functions form a basis of all solutions, we employ a similar argument as in the proof of Proposition 3.8. Again, we use Vignéras’ result to show that the solutions f of (3.9) have a certain form: We define the function \({\widetilde{f}}(U):=f(U)\exp \bigl ( -\pi {\text {tr}}(U^{\mathsf {T}}{\mathcal {I}}U) \bigr )\), which is a Schwartz function by assumption. Furthermore, identify \({\mathbb {R}}^{m\times n}\) with \({\mathbb {R}}^{m n}\) by writing \(U\in {\mathbb {R}}^{m\times n}\) as a column vector in \({\mathbb {R}}^{mn}\). As we have
which equals the normalized quadratic form of signature (rn, sn) on \({\mathbb {R}}^{mn}\), we write \({\widetilde{f}}\) as
As an \({\mathcal {L}}^2({\mathbb {R}}^{m n})\)-function, \({\widetilde{f}}\) is given in an orthogonal basis of Hermite functions \(H_{\mathbf {k}}\) in mn variables in the form of \({\widetilde{f}}=\sum _{\mathbf {k}\in {\mathbb {N}}_0^{mn}} c_{\mathbf {k}}H_{\mathbf {k}}\) with \(c_{\mathbf {k}}\in {\mathbb {R}}\). Since f is a solution of
we restrict the possible basis elements that appear in the expansion of \({\widetilde{f}}\):
Thus, as a consequence of Vignéras’ result for genus 1, any solution of (3.14) is given as a (possibly infinite) linear combination of functions
where the Hermite functions on \({\mathbb {R}}^{m n}\) (respectively \({\mathbb {R}}^{m\times n}\)) are given as a product of one-dimensional Hermite functions:
with polynomials p, q, which are defined on \(U_r, U_s\) respectively. Rewriting \(f_{\mathbf {k}}\) as
each solution of (3.9) is given as a linear combination of functions of the form (3.15). The system of partial differential equations (3.9) is separable, i.e. it can be split into the part that depends on \(U_r\) and the part that depends on \(U_s\). Likewise, \(f_{\mathbf {k}}\) is given by a polynomial factor p depending on \(U_r\) and a factor of the form \(q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr )\), where q also denotes a polynomial. We can write \({\mathcal {D}}_{{\mathcal {I}}}={\mathcal {D}}^r+{\mathcal {D}}^s\) such that the differential operator \({\mathcal {D}}^r\) vanishes if we apply it to a function of \(U_s\), and has the form \({\mathcal {D}}_{I_r}\) when applied to a function of \(U_r\). Analogously, \({\mathcal {D}}^s\) only depends on \(U_s\) and is of the form \({\mathcal {D}}_{-I_s}\) when applied to functions of \(U_s\). So we have \({\mathcal {D}}_{{\mathcal {I}}} f_{\mathbf {k}}=\lambda \cdot I\cdot f_{\mathbf {k}}\) with
For \(f_{\mathbf {k}}(U)\ne 0\) we divide by \(f_{\mathbf {k}}\) and obtain, for each entry of the system of partial differential equations, a sum of two expressions that depend on different sets of variables and therefore have to be constant. It follows that a function \(f_{\mathbf {k}}\) solving (3.9) is given as the product described in (3.15) with the additional restriction that
We show that \(C_r=\alpha \cdot I\) for some \(\alpha \in {\mathbb {N}}_0\) holds and thus \(C_s=(\lambda -\alpha )\cdot I\). By applying the operator \(\exp \bigl ({\text {tr}}{\Delta }/8\pi \bigr )\) to \(p(U_r)\), we can deduce analogously to the proof of Lemma 3.7 that \(p(U_r)\) satisfies
if and only if the polynomial \(P_r(U_r):=\exp \bigl ({\text {tr}}{\Delta }/8\pi \bigr )\bigl (p(U_r)\bigr )\) satisfies \(\mathbf{E }P_r=C_r\cdot P_r\). We have shown in Lemma 3.5 that this system of partial differential equations admits polynomial solutions only if \(C_r=\alpha \cdot I\) with \(\alpha \in {\mathbb {N}}_0\).
Thus, every solution f of (3.9) is described by basis elements \(f_{\mathbf {k}}\) that consist of two factors depending on different variables: p solves the system of partial differential equations in Proposition 3.8, where we have shown that these solutions can be described by a basis of homogeneous polynomials of degree \(\alpha \). Similarly, the function \(q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr )\) solves the system of equations in Lemma 3.10, where we also described a basis of solutions using homogeneous polynomials of degree \(\beta \). We conclude that all solutions of (3.9) are described by the functions \(f_{\alpha ,\beta }\) defined above with \(\alpha ,\beta \in {\mathbb {N}}_0\) such that \(\alpha -\beta =\lambda +s\). Thus, the basis consists of infinitely many elements if \(r,s>0\) (i.e. when A is indefinite) and finitely many otherwise (i.e. when A is positive or negative definite).
We substitute \(U\mapsto S^{-1}U\) and apply Lemma 3.11 to obtain the result for the system of partial differential equations (3.1). Note that for every basis element \(f_{\alpha ,\beta }\) the polynomial P splits as \(P(U)=P_r(U^+)\cdot P_s(U^-)\) by assumption. \(\square \)
4 Construction of theta series with modular transformation behavior
In this section, we construct Siegel theta series that transform like modular forms of weight \(m/2+\lambda \), arising from the functions that we considered in the last section as solutions of \({\mathcal {D}}_A f=\lambda \cdot I \cdot f\). We explicitly determine the transformation behavior of the theta series with respect to \(Z\mapsto Z+S\) (for a symmetric matrix \(S\in {{\mathbb {Z}}}^{{n}\times {n}}\)) and \(Z\mapsto -Z^{-1}\). To state the next lemma, in which we describe the transformation behavior of \(\vartheta _{{H},{K}}\) with respect to the former transformation, we introduce the following notation for matrices:
Definition 4.1
(a) For \(M\in {{\mathbb {Z}}}^{{\mu }\times {\mu }}\) we define \(M_0\in {{\mathbb {Z}}}^{{\mu }\times {\mu }}\) by \((M_0)_{ij}=M_{ii}\) for \(i=j\) and zero otherwise.

(b) We write \(\mathrm {1}_{\mu \nu }\) for the matrix with \(\mu \) rows and \(\nu \) columns whose entries are all equal to 1.
Lemma 4.2
Let \(S\in {{\mathbb {Z}}}^{{n}\times {n}}\) denote a symmetric matrix. With respect to \(Z\mapsto Z+S\), the theta series from Definition 1.3 transforms as follows:
with
Proof
Write \(U=H+R\) with \(R\in {\mathbb {Z}}^{m\times n}\) such that
It is straightforward to see that
As A and S both denote symmetric matrices and \(x^2\equiv x\ ({\text {mod}} 2)\) for any \(x\in {\mathbb {Z}}\), we have
To rewrite the expression on the right-hand side in terms of matrices, we introduce the matrix \(\mathrm {1}_{nm}\in {\mathbb {Z}}^{n\times m}\) that only contains 1’s as entries and obtain
\(\square \)
In the following two sections, we determine how the theta series behaves under \(Z\mapsto -Z^{-1}\). To put it briefly, we calculate the Fourier transform of the summand and then apply the Poisson summation formula. We define the Fourier transform associated with the matrix A:
Definition 4.3
Let \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {C}}\) such that \(f\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\). Then \({\widehat{f}}\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\) denotes the Fourier transform
with dU the Euclidean volume element.
Note that we do not take the standard definition of the Fourier transform as a unitary operator here, but rather we obtain the additional normalizing factor \(|\det A|^{-n/2}\). Consequently, the Poisson summation formula has the form
In Sect. 4.1, we consider theta series associated with positive definite quadratic forms and give a set of examples of non-holomorphic Siegel modular forms. We obtain those results by generalizing the set-up of Freitag [6]. In Sect. 4.2, we see that a similar construction also yields theta series associated with indefinite quadratic forms that transform like Siegel modular forms.
4.1 Theta series for positive definite quadratic forms
In this section, \(p:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) is a polynomial and \(A\in {{\mathbb {Z}}}^{{m}\times {m}}\) is a symmetric positive definite matrix. Following Freitag [6], we first examine the series
We consider the operator
and define
Since we are assuming that p is a polynomial, this sum is finite.
Lemma 4.4
The following rules hold for \(a,b,c\in {\mathbb {C}}\) and \(M\in {{\mathbb {C}}}^{{m}\times {m}},N\in {{\mathbb {C}}}^{{n}\times {n}}\):
Proof
We derive Property (4.3) by considering the Cauchy product of the absolutely convergent series \(\sum _{k=0}^\infty \frac{1}{k!}(a{\text {tr}}{\Delta }_A)^k\) and \(\sum _{k=0}^\infty \frac{1}{k!}(b{\text {tr}}{\Delta }_A)^k\). The identity (4.4) follows immediately from (4.5) when we set \(N:=a\cdot I \in {{\mathbb {C}}}^{{n}\times {n}}\). To show (4.5), we consider p(UN) and apply the Laplacian. We have
since
With the same argument we obtain
and therefore
Rewriting the Laplacian in the sum then gives (4.5). Analogously we obtain (4.6). \(\square \)
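The Cauchy-product argument for (4.3) can be checked symbolically on a polynomial, where all series are finite (illustration with \(m=n=1\) and \(A=(1)\), so that \({\text {tr}}{\Delta }_A\) is assumed to reduce to \(d^2/du^2\)):

```python
import sympy as sp

# Sketch of identity (4.3): on a polynomial the series for
# exp(a tr Delta_A) is a finite sum, and composing exp(a ...) with
# exp(b ...) agrees with exp((a + b) ...).  Illustration with
# m = n = 1 and A = (1), where tr Delta_A is taken to be d^2/du^2.
u, a, b = sp.symbols('u a b')

def exp_lap(f, coeff):
    # finite sum  sum_k coeff^k (d^2/du^2)^k f / k!
    total, term, k = sp.S(0), sp.expand(f), 0
    while term != 0:
        total += term
        k += 1
        term = sp.expand(coeff * sp.diff(term, u, 2) / k)
    return sp.expand(total)

p = u**6
composed = exp_lap(exp_lap(p, a), b)   # exp(b ...) exp(a ...) p
direct = exp_lap(p, a + b)             # exp((a + b) ...) p
```

Both expressions expand to the same polynomial in u, a, b, which is the content of (4.3) in this special case.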
We calculate the Fourier transform of the summands in the series (4.2). To shorten the calculation, we apply the following result by Freitag [6, p. 158f.], who considers Gauss transforms: we have
Note that Freitag uses the normalized Laplace operator \({\text {tr}}{\Delta }={\text {tr}}{\Delta }_I\). In the next lemma, we see that for arbitrary polynomials p, the functions \(p(U){\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2\bigr )\) are not necessarily eigenfunctions with regard to the Fourier transform:
Lemma 4.5
Let \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {C}},f(U):=p(U){\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2\bigr ).\) The Fourier transform of f is
Proof
We rewrite Freitag's result (4.7) to obtain a form that is suitable for the calculation of the Fourier transform: we substitute U by \(U+iV\), and as we examine a holomorphic integrand in several complex variables, we may shift the contour of integration by Cauchy's theorem (i.e. instead of integrating over U one can integrate over \(U+iV\) without changing the integral) and obtain
To determine
we set \(Z=iY\) and substitute U by \(A^{-1/2}UY^{-1/2}\) (A and Y are positive definite symmetric matrices, so the same holds for the inverses and uniquely determined square roots):
This is (4.8) evaluated at \(A^{1/2}VY^{-1/2}\) with a slightly changed argument in the polynomial p. We apply (4.5) and (4.6) and use that Y is symmetric to write \({\widehat{f}}\) as
As the integrand is a holomorphic function, we resubstitute \(Y=-iZ\) (for the inverse we have \(Y^{-1}=iZ^{-1}\)) and deduce the claim by analytic continuation. \(\square \)
In order to obtain an eigenfunction of the Fourier transform, Freitag [6] chooses p to be a harmonic polynomial, i.e. \(({\text {tr}}{\Delta }) p=0\) and \(p(UN)=\det N^{\alpha } p(U)\) hold for all \(N\in {{\mathbb {C}}}^{{n}\times {n}}\). We consider the more general class of polynomials
We described the vector space \({\mathcal {P}}_{\alpha }^{m,n}\) in Sect. 3.1, where we have also seen that the functions \(p(U):=\exp (-{\text {tr}}({\Delta }_A)/8\pi )\bigl (P(U)\bigr )\) with \(P\in {\mathcal {P}}_{\alpha }^{m,n}\) form a basis for the vector space of solutions of \({\mathcal {D}}_A f=\alpha \cdot I \cdot f\). The slightly modified functions in (4.9) depend on the imaginary part Y of Z, which means that we lose holomorphicity in the construction of the theta series. However, for harmonic polynomials P, we obtain the holomorphic theta series considered by Freitag. Note that this is essentially a generalization of Borcherds' construction for \(n=1\) in [4]; see Remark 2.4 for a more detailed explanation.
Lemma 4.6
Let \(p_Z\) denote a polynomial from (4.9) and define
The Fourier transform of \(f_Z\) is
Proof
We apply Lemma 4.5 and then use the linearity of the trace and Property (4.3):
If \({\widetilde{Y}}\) denotes the imaginary part of \(-Z^{-1}\), the identity \({\widetilde{Y}}={\overline{Z}}^{-1} Y Z^{-1}\) holds by (2.1). Hence,
The matrix Z is symmetric and therefore also its inverse \(Z^{-1}\), which means that we can rewrite (4.10) as follows:
Using Property (4.5) and the homogeneity of \(P\in {\mathcal {P}}_{\alpha }^{m,n}\), we conclude that the Fourier transform of \(f_Z\) has the form
Separating constant factors and factors depending on the determinants of A and Z, we deduce the claim. \(\square \)
This construction yields theta series that transform like Siegel modular forms:
Proposition 4.7
Let \(A\in {\mathbb {Z}}^{m\times m}\) denote a positive definite symmetric matrix and p the polynomial defined as \(p(U)=\exp \bigl (-{\text {tr}}{\Delta }_A/8\pi \bigr )\bigl (P(U)\bigr )\) with \(P\in {\mathcal {P}}_{\alpha }^{m,n}\). For the corresponding theta series \(\vartheta _{{H},{K}}\) given in Definition 1.3 we have
Proof
We recall the definition of \(\vartheta _{{H},{K}}\), which is
We use Property (4.5) and the homogeneity property of P to rewrite
and analogously \(p_{-Z^{-1}}(U)=\det {\widetilde{Y}}^{-\alpha /2}p(U{\widetilde{Y}}^{1/2})\). That means the theta series has the form
By Lemma 4.6, the Fourier transform of the summand equals
The summands in the theta series are Schwartz functions as A denotes a positive definite quadratic form. Hence, we apply the Poisson summation formula (4.1), and obtain
which completes the proof. \(\square \)
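In the special case \(m=n=1\), \(A=(1)\), \(P=1\), the transformation established above reduces to the classical Jacobi inversion \(\sum _k e^{-\pi t k^2}=t^{-1/2}\sum _k e^{-\pi k^2/t}\), which serves as a quick numerical sanity check of the Poisson-summation argument:

```python
import math

# Numeric sanity check (illustrative special case m = n = 1, A = (1)):
# Poisson summation gives the Jacobi theta inversion
#   sum_k exp(-pi t k^2) = t^(-1/2) sum_k exp(-pi k^2 / t),
# i.e. the Z -> -Z^{-1} transformation evaluated at Z = i t.
def gauss_sum(t, N=60):
    # truncated theta sum; the tail is negligible for moderate t
    return sum(math.exp(-math.pi * t * k * k) for k in range(-N, N + 1))

t = 0.7
lhs = gauss_sum(t)
rhs = t**-0.5 * gauss_sum(1.0 / t)
```

The two truncated sums agree to machine precision, as the transformation formula predicts.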
Example 4.8
For \(m\equiv 0\ ({\text {mod}} 8)\), we choose an even unimodular matrix \(A\in {\mathbb {Z}}^{m\times m}\), which means in particular that \(\det A=1\) and \(A^{-1}\in {\mathbb {Z}}^{m\times m}\). Considering the theta series
we have \(\vartheta _{{\mathrm {O}},{\mathrm {O}}}(Z+S)=\vartheta _{{\mathrm {O}},{\mathrm {O}}}(Z)\) for any symmetric matrix \(S \in {{\mathbb {Z}}}^{{n}\times {n}}\) by Lemma 4.2 and \(\vartheta _{{\mathrm {O}},{\mathrm {O}}}(-Z^{-1})=\det Z^{m/2+\alpha }\vartheta _{{\mathrm {O}},{\mathrm {O}}}(Z)\) for a polynomial p as chosen in Proposition 4.7. Thus, \(\vartheta _{{\mathrm {O}},{\mathrm {O}}}\) is a non-holomorphic Siegel modular form of weight \(m/2+\alpha \) on the full Siegel modular group \(\varGamma _n\).
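The first coefficients of such a theta series can be checked numerically. For \(n=1\), \(p=1\) and \(m=8\), the theta series of an even unimodular matrix is the Eisenstein series \(E_4=1+240q+2160q^2+\cdots \) (a classical fact); counting vectors in the standard coordinate model \(E_8=D_8\cup \bigl (D_8+(\tfrac{1}{2},\ldots ,\tfrac{1}{2})\bigr )\) reproduces these coefficients:

```python
from itertools import product

# Numeric sanity check (classical fact, not part of the proof): in the
# coordinate model E8 = D8 u (D8 + (1/2,...,1/2)), lattice vectors are
# the x with all-integer or all-half-odd-integer coordinates and even
# coordinate sum.  Counting vectors of norm x.x = 2 and 4 reproduces
# the q-expansion 1 + 240 q + 2160 q^2 + ... of E4.
def e8_counts(max_norm=4):
    counts = {q: 0 for q in range(1, max_norm + 1)}
    # all-integer vectors with even coordinate sum
    for x in product(range(-2, 3), repeat=8):
        q = sum(t * t for t in x)
        if q in counts and sum(x) % 2 == 0:
            counts[q] += 1
    # half-odd-integer vectors x = h/2 with h odd; even sum of x means
    # sum(h) = 0 mod 4, and x.x = sum(h^2)/4
    for h in product((-3, -1, 1, 3), repeat=8):
        q, s = sum(t * t for t in h), sum(h)
        if q % 4 == 0 and q // 4 in counts and s % 4 == 0:
            counts[q // 4] += 1
    return counts

counts = e8_counts()   # norm 2 -> 240, norm 4 -> 2160
```

The odd norms receive no vectors at all, reflecting that the lattice is even.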
4.2 Theta series for indefinite quadratic forms
In this section, we consider theta series associated with non-degenerate symmetric matrices \(A\in {{\mathbb {Z}}}^{{m}\times {m}}\) with signature (r, s), where \(s\ge 0\). As described in Remark 2.3, we decompose \(A=A^{+}+A^{-}\) by employing the matrix of normalized eigenvectors \(S\in {\mathbb {R}}^{m\times m}\) so that we obtain the associated majorant matrix \(M=A^{+}-A^{-}\) and the projections \(U^{\pm }\) of U into the positive and negative subspaces of \({\mathbb {R}}^{m\times n}\) respectively. We replace the polynomials p that were defined in Proposition 4.7 by functions of the form
where \(P\in {\mathcal {P}}_{\alpha +\beta }^{m,n}\) is given as the product \(P(U)=P_{\alpha }(U^+)\cdot P_{\beta }(U^-)\) with \(P_{\alpha }\in {\mathcal {P}}_{\alpha }^{m,n}\) and \(P_{\beta }\in {\mathcal {P}}_{\beta }^{m,n}\). For \(\alpha -\beta =\lambda +s\), we know from Sect. 3.2 that these functions are solutions of \({\mathcal {D}}_A f=\lambda \cdot I\cdot f\). Of course, we can also replace g by a linear combination of functions of this type, under the assumption that \((\alpha ,\beta )\in {\mathbb {N}}_0^2\) with \(\alpha -\beta =\lambda +s\), to construct modular Siegel theta series. However, it suffices for the proof of Theorem 1.5, and simplifies the following calculations, to consider only g as defined above, since these functions in particular include the basis elements of the vector space of solutions of \({\mathcal {D}}_A f=\lambda \cdot I\cdot f\).
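The decomposition from Remark 2.3 can be carried out numerically by an eigendecomposition; the following sketch (for an illustrative matrix of signature (1, 1)) computes \(A^{\pm }\) and the positive definite majorant \(M=A^{+}-A^{-}\):

```python
import numpy as np

# Hedged numeric sketch of the decomposition in Remark 2.3: split a
# symmetric indefinite A into A = A+ + A- via its eigendecomposition
# and form the positive definite majorant M = A+ - A-.  The matrix
# below is an illustrative choice of signature (1, 1).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
eigval, eigvec = np.linalg.eigh(A)
Aplus = eigvec @ np.diag(np.maximum(eigval, 0.0)) @ eigvec.T   # A+  (psd)
Aminus = eigvec @ np.diag(np.minimum(eigval, 0.0)) @ eigvec.T  # A-  (nsd)
M = Aplus - Aminus                                             # majorant
```

By construction \(A^{+}+A^{-}=A\), and M has the absolute values of the eigenvalues of A, hence is positive definite.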
In analogy with the last section, we define
and
For \(s=0\), we get back the functions from Lemma 4.6, so we use the same notation.
Lemma 4.9
The Fourier transform of \(f_Z(U)=g_Z(U){\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2\bigr )\) is
Proof
We change the basis of \({\mathbb {R}}^m\) by the substitution of \(U\mapsto SU\) to obtain a part that depends on the first r rows of U (again, we denote this part of the matrix by \(U_r\)) and one part that depends on the last s rows of U (analogously, we denote this part by \(U_s\)):
We can split up the integral, as the polynomial P factors as a polynomial dependent on \(U_r\) and \(U_s\) respectively. We now apply the results for positive definite quadratic forms. By Lemma 4.6, we obtain
We treat the part that depends on the negative definite subspace like an expression that is associated with a positive definite quadratic form given by \(I_s\) and consider \(-{\overline{Z}}\in {\mathbb {H}}_n\) as variable in the Siegel upper half-space. Also note that by (2.1) we have
In particular, \({\text {Im}}\bigl ({\overline{Z}}^{-1}\bigr )={\text {Im}}(-Z^{-1})={\widetilde{Y}}\) and thus we have
where we evaluate the Fourier transform for \(-V_s\), and use Property (4.5) and the identity \(P_{\beta }(-V_s)=(-1)^{\beta s} P_{\beta }(V_s)\) to rewrite the expression. We now consider the product of (4.12) and (4.14) (we make use of (4.13) again to rewrite the exponential factor) and obtain:
We evaluate this integral at \(S^{-1}V\) to complete the proof. Without loss of generality, we can assume that \(\det S>0\) and therefore write \(\det S^n\) as \(|\det A|^{n/2}\). \(\square \)
Thus, we can state a more general version of Proposition 4.7 for Siegel theta series for indefinite quadratic forms.
Proposition 4.10
Let \(\lambda =\alpha -\beta -s\) and let \(g:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) define a function from (4.11). The theta series of the form
transforms as follows:
Proof
We use the same approach as in the proof of Proposition 4.7. By Property (4.5), we have \(g_{-Z^{-1}}(U)=\det {\widetilde{Y}}^{-(\alpha +\beta )/2}g(U{\widetilde{Y}}^{1/2})\), and thus we rewrite the theta series as
By Lemma 4.9, the Fourier transform of the summand equals
By (4.13), we have \({\widetilde{Y}}={\overline{Z}}^{-1}YZ^{-1}\) and thus rewrite
As \(\det Y^{s/2+\beta } g_Z(U)=\det Y^{-\lambda /2}g(UY^{1/2})\) by Property (4.5), we have
Again, we write
which completes the proof. \(\square \)
Example 4.11
We obtain examples of non-holomorphic Siegel modular forms on the full Siegel modular group if \(H=K=\mathrm {O}\) and A is an even unimodular matrix and additionally \(i^{mn/2}(-1)^{(s/2+\beta )n+\beta s}=1\) holds. Note that an even symmetric unimodular matrix of indefinite signature (r, s) only exists when \(r-s\equiv 0\ ({\text {mod}} 8)\) and is isomorphic to \(H_2^k\oplus (\pm E_8)^{\ell }\) with \(k=\min \{r,s\}\) and \(\ell =|r-s|/8\), where \(H_2=\left( \begin{array}{ll} 0&1\\ 1&0 \end{array}\right) \) and \(E_8\) represents the equivalence class of all even unimodular positive definite matrices of rank 8 (we take \(E_8\) if \(r>s\) and \(-E_8\) if \(r<s\)); see for example Husemoller and Milnor [8, pp. 24–26] for more details.
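As a sanity check of this classification (for an illustrative choice of signature (10, 2), so \(k=2\) and \(\ell =1\)), one can verify numerically that \(H_2\oplus H_2\oplus E_8\) has 10 positive and 2 negative eigenvalues:

```python
import numpy as np

# Illustrative check: for signature (r, s) = (10, 2) we have
# k = min(r, s) = 2 and l = (r - s)/8 = 1, and the Gram matrix
# H2 + H2 + E8 (orthogonal direct sum) has 10 positive and 2 negative
# eigenvalues.  E8 below is the standard Cartan matrix of the E8 root
# lattice: a chain of 7 nodes with the 8th node attached to node 5.
H2 = np.array([[0, 1], [1, 0]])
E8 = 2 * np.eye(8) + np.diag([-1] * 7, 1) + np.diag([-1] * 7, -1)
E8[6, 7] = E8[7, 6] = 0    # detach node 8 from the end of the chain ...
E8[4, 7] = E8[7, 4] = -1   # ... and attach it to node 5 (E8 Dynkin shape)

def direct_sum(*blocks):
    n = sum(b.shape[0] for b in blocks)
    out, pos = np.zeros((n, n)), 0
    for b in blocks:
        k = b.shape[0]
        out[pos:pos + k, pos:pos + k] = b
        pos += k
    return out

G = direct_sum(H2, H2, E8)
eig = np.linalg.eigvalsh(G)
signature = (int(np.sum(eig > 0)), int(np.sum(eig < 0)))
```

The check also confirms that the \(E_8\) block itself is positive definite with determinant 1, i.e. even unimodular.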
4.3 Proof of Theorem 1.5
In Sect. 3, we introduced the \(n\times n\) system of partial differential equations \({\mathcal {D}}_A f=\lambda \cdot I\cdot f\) and determined a basis for all the solutions f that additionally satisfy the growth condition \(f(U)\exp (-\pi {\text {tr}}(U^{\mathsf {T}}AU))\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\) (see Proposition 3.12). In this section, we have determined the modular transformation behavior of the associated Siegel theta series \(\vartheta _{H,K,f,A}\) by explicitly calculating the transformation formulas for the generators \(Z\mapsto Z+S\) (see Lemma 4.2) and \(Z\mapsto -Z^{-1}\) (see Proposition 4.10) of the Siegel modular group. For an even matrix A and \(\lambda =\alpha -\beta -s\), the theta series \(\vartheta _{\mathrm {O},\mathrm {O},f,A}\) transforms like a Siegel modular form of genus n and weight \(m/2+\lambda \) on some congruence subgroup of \(\varGamma _n\). This proves Theorem 1.5.
Change history
09 October 2021
The original article has been revised to add the Open Access Funding note.
References
Alexandrov, S., Banerjee, S., Manschot, J., Pioline, B.: Indefinite theta series and generalized error functions. Sel. Math. New Ser. 24(5), 3927–3972 (2018)
Andrianov, A.N.: Introduction to Siegel Modular Forms and Dirichlet Series, Universitext. Springer Science+Business Media LLC, New York, NY (2009)
Andrianov, A.N., Maloletkin, G.N.: Behavior of theta series of degree \(n\) under modular substitutions. Math. USSR-Izv. 9(2), 227–241 (1975)
Borcherds, R.E.: Automorphic forms with singularities on Grassmannians. Invent. Math. 132(3), 491–562 (1998)
Dittmann, M., Salvati Manni, R., Scheithauer, N.R.: Harmonic theta series and the Kodaira dimension of \({\cal{A}}_6\). Algebra Number Theory 15(1), 271–285 (2021)
Freitag, E.: Siegelsche Modulfunktionen, Grundlehren der mathematischen Wissenschaften, vol. 254. Springer, Berlin (1983)
Funke, J., Kudla, S.S.: On some incomplete theta integrals. Compos. Math. 155(9), 1711–1746 (2019)
Husemoller, D., Milnor, J.: Symmetric bilinear forms, Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 73. Springer, Berlin (1973)
Kudla, S.S.: Theta integrals and generalized error functions. Manuscr. Math. 155(3), 303–333 (2018)
Kudla, S.S., Millson, J.J.: The theta correspondence and harmonic forms. I. Math. Ann. 274(3), 353–378 (1986)
Kudla, S.S., Millson, J.J.: The theta correspondence and harmonic forms. II. Math. Ann. 277(2), 267–314 (1987)
Maass, H.: Zur Theorie der harmonischen Formen. Math. Ann. 137(2), 142–149 (1959)
Nazaroglu, C.: \(r\)-tuple error functions and indefinite theta series of higher-depth. Commun. Number Theory Phys. 12(3), 581–608 (2018)
Siegel, C.L.: Über die analytische Theorie der quadratischen Formen. Ann. Math. 36(3), 527–606 (1935)
Vignéras, M.-F.: Séries thêta des formes quadratiques indéfinies. Séminaire Delange-Pisot-Poitou, Théorie des nombres 17(1) (1975–1976)
Vignéras, M.-F.: Séries thêta des formes quadratiques indéfinies, Serre J.-P., Zagier D.B. (eds) Modular Functions of One Variable VI. Lecture Notes in Mathematics, vol. 627, pp. 227–239. Springer, Berlin, Heidelberg (1977)
Westerholt-Raum, M.: Indefinite theta series on cones. Preprint, arXiv:1608.08874
Zwegers, S.P.: Mock theta functions. Ph.D. thesis, Universiteit Utrecht (2002)
Acknowledgements
The author is grateful to Sander Zwegers for suggesting the topic and for lots of helpful advice. In addition, the author would like to thank Jens Funke for useful remarks during the DMV Jahrestagung 2019, Markus Schwagenscheidt for pointing out how the system of differential equations can be reduced in order to apply Vignéras' result, Ben Wright for carefully checking the manuscript for linguistic deficiencies, and the reviewers, whose many insightful comments improved the exposition of this paper significantly.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Roehrig, C. Siegel theta series for indefinite quadratic forms. Res. number theory 7, 45 (2021). https://doi.org/10.1007/s40993-021-00272-y