1 Introduction

In the course of his work on the Minkowski–Hasse principle for quadratic forms over the rationals, Siegel introduced a natural generalization of elliptic modular forms to higher genus n [14]. Among those functions, nowadays called Siegel modular forms, Siegel theta series play a similarly important role as theta series do in the context of elliptic modular forms. In a recently published article by Dittmann et al. [5], a basis of the space of Siegel cusp forms of degree 6 and weight 14 is given by harmonic Siegel theta series. By considering one of these basis elements, the authors deduce that the Kodaira dimension of the Siegel modular variety \({\mathcal {A}}_6={\text {Sp}}_{12}({\mathbb {Z}})\setminus {\mathbb {H}}_6\) is non-negative.

In order to give more examples of Siegel theta series and make use of the connection to various topics, such as algebraic geometry and number theory, it is desirable to give a general framework for the description of holomorphic and non-holomorphic Siegel theta series analogous to what is already known for elliptic theta series owing to the work of Vignéras [16]. If theta series are built from functions that satisfy a certain second-order differential equation, the modularity of these series immediately follows. For the (generalized) error functions, which are employed in recent discussions of theta series for indefinite quadratic forms, this criterion is used to derive the modular transformation behavior of the emerging theta series. Namely, these are the results by Zwegers [18] for quadratic forms of signature \((m-1,1)\), by Alexandrov et al. [1] for signature \((m-2,2)\), and for arbitrary signature by Nazaroglu [13] and Westerholt-Raum [17], which are brought together by Kudla [9] and Funke and Kudla [7]. Even before that, Kudla and Millson [10, 11] considered a certain class of Schwartz functions to define modular forms in terms of theta functions and obtain holomorphic modular forms valued in the set of cohomology classes.

In most of these examples the criterion given by Vignéras plays an important role in order to deduce modularity, so the question arises whether a similar result holds for more general types of theta series. Vignéras herself derives the result of [16] in a second paper by considering the Weil representation and mentions that the result is expected to hold for Hilbert and Siegel theta series as well, see [15].

In the following, we prove this for the latter case by describing Siegel theta series for indefinite quadratic forms and deriving a generalization of Vignéras’ result for generic genus n. We adopt an elementary approach similar to the one in [16], which has the advantage that we explicitly construct a basis of suitable functions. This construction also recovers the known results for positive definite quadratic forms, described for instance by Freitag [6]. In this case, these “suitable functions” are harmonic polynomials and one obtains holomorphic series. However, the Siegel theta series constructed in the present paper are in general non-holomorphic. In a sequel to this paper, we will investigate the special case where the quadratic form has signature \((m-1,1)\) and, by applying the result shown here, deduce the modularity of non-holomorphic Siegel theta series, which are related to holomorphic (non-modular) Siegel theta series.

We give a short overview of the main results. We use standard conventions concerning the notation: \({\text {e}}(z):=\exp (2\pi iz)\), and multiplication takes precedence over division, so for example \(1/8\pi \) means \(1/(8\pi )\).

Definition 1.1

Throughout this paper, let \(A\in {{\mathbb {Z}}}^{{m}\times {m}}\) denote a non-degenerate symmetric matrix of signature \((r,s)\).

Remark 1.2

Note that we do not generally assume that A is even. Also, in some sections we explicitly set \(s=0\) and thus employ properties of the then positive definite matrix A.

We construct modular forms on the Siegel upper half-space

$$\begin{aligned} {\mathbb {H}}_n :=\lbrace Z=X+iY \mid X,Y \in {{\mathbb {R}}}^{{n}\times {n}}\text { symmetric}, Y \text { positive definite}\rbrace \end{aligned}$$

in the form of Siegel theta series. We denote by \({\mathcal {S}}({\mathbb {R}}^{m\times n})\) the space of Schwartz functions on \({\mathbb {R}}^{m\times n}\) and then choose \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) such that

$$\begin{aligned} f(U)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}A U)\bigr )\in {\mathcal {S}}({\mathbb {R}}^{m\times n}). \end{aligned}$$

This ensures the absolute convergence of the theta series that we define in the following.

Definition 1.3

Let \(H,K\in {\mathbb {R}}^{m\times n}\) and let \(\lambda \in {\mathbb {Z}}\). The theta series with characteristics H and K associated with f and A is

$$\begin{aligned} \vartheta _{{H},{K}}(Z)&=\vartheta _{H,K,f,A}(Z)\\&:=\det Y^{-\lambda /2}\sum \limits _{U\in H+{\mathbb {Z}}^{m\times n}}f(UY^{1/2})\,{\text {e}}\bigl ({\text {tr}}(U^{\mathsf {T}}AUZ)/2+{\text {tr}}(K^{\mathsf {T}}AU)\bigr ). \end{aligned}$$

Remark 1.4

We drop the parameters f and A in the index when the transformation of \(\vartheta _{{H},{K}}\) leaves them invariant. In the following, it becomes clear that the choice of \(\lambda \) depends on f, so we do not include it as an additional parameter in the definition.

For a positive definite matrix A, we consider polynomials \(P:{\mathbb {C}}^{m\times n}\longrightarrow {\mathbb {C}}\) that satisfy \(P(UN)=\det N^\alpha P(U)\) for all \(N\in {\mathbb {C}}^{n\times n}\) and a fixed \(\alpha \in {\mathbb {N}}_0\). These polynomials form a complex vector space, which we denote by \({\mathcal {P}}_{\alpha }^{m,n}\). For a modified polynomial

$$\begin{aligned} p(U)=\exp \bigl (-{\text {tr}}{\Delta }_A/8\pi \bigr )\bigl (P(U)\bigr )\quad \text {where}\quad {\Delta }_A:= \Bigl (\frac{\partial }{\partial U}\Bigr )^{\mathsf {T}}A^{-1}\frac{\partial }{\partial U}, \end{aligned}$$

and for A even and \(\lambda =\alpha \), the theta series \(\vartheta _{\mathrm {O},\mathrm {O},p,A}\) transforms like a Siegel modular form of weight \(m/2+\alpha \) on a congruence subgroup of \(\varGamma _n\) and with some character, where both depend on the level of A. If \(P\in {\mathcal {P}}_{\alpha }^{m,n}\) is annihilated by the Laplacian \({\text {tr}}{\Delta }_A\), we obtain the holomorphic theta series considered by Freitag [6].
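To make this construction concrete, here is a small worked example (the choice of A and P is ours, purely for illustration): for \(m=2\), \(n=1\), \(A=2I_2\) and \(P(u)=u_1^2\in {\mathcal {P}}_{2}^{2,1}\), the exponential operator truncates after one term.

```latex
% Illustrative example (our choice): m = 2, n = 1, A = 2 I_2, P(u) = u_1^2.
% Here \Delta_A = \tfrac12\bigl(\partial^2/\partial u_1^2+\partial^2/\partial u_2^2\bigr),
% so \operatorname{tr}\Delta_A P = 1 and (\operatorname{tr}\Delta_A)^2 P = 0:
p(u) = \exp\bigl(-\operatorname{tr}\Delta_A/8\pi\bigr)\bigl(P(u)\bigr)
     = u_1^2 - \frac{1}{8\pi}.
% One checks directly that p is an eigenfunction of Vignéras' operator:
\Bigl(\mathbf{E}-\frac{\Delta_A}{4\pi}\Bigr)p
     = 2u_1^2 - \frac{1}{4\pi} = 2\,p(u),
% i.e. \lambda = \alpha = 2, and the associated theta series transforms
% with weight m/2 + \alpha = 3.
```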

When A denotes an indefinite quadratic form of signature \((r,s)\), we write \(A=A^{+}+A^{-}\) with a positive semi-definite matrix \(A^{+}\) and a negative semi-definite matrix \(A^{-}\) and denote by \(M=A^{+}-A^{-}\) the positive definite majorant matrix of A (see Remark 2.3). We consider the function

$$\begin{aligned} g(U)=\exp \bigl ( -{\text {tr}}{\Delta }_{M}/8\pi \bigr )\bigl (P(U)\bigr )\exp \bigl (2\pi {\text {tr}}(U^{\mathsf {T}}A^{-}U )\bigr ), \end{aligned}$$

assuming that \(P\in {\mathcal {P}}_{\alpha +\beta }^{m,n}\) factorizes as \(P(U)=P_\alpha (U^+)\cdot P_\beta (U^-)\) with \(P_\alpha \in {\mathcal {P}}_{\alpha }^{m,n}\), \(P_\beta \in {\mathcal {P}}_{\beta }^{m,n}\) and \(U=U^++U^-\), where \(U^+\) denotes the part of U that belongs to the subspace on which A is positive semi-definite, i.e. \({\text {tr}}\bigl ((U^+)^{\mathsf {T}}A U^+\bigr )={\text {tr}}\bigl (U^{\mathsf {T}}A^{+}U \bigr )\), and similarly \({\text {tr}}\bigl ((U^-)^{\mathsf {T}}A U^-\bigr )={\text {tr}}\bigl (U^{\mathsf {T}}A^{-}U \bigr )\). For this choice of g, with A even and \(\lambda =\alpha -\beta -s\), the theta series \(\vartheta _{\mathrm {O},\mathrm {O},g,A}\) transforms like a non-holomorphic Siegel modular form of weight \(m/2+\lambda \) on a congruence subgroup of \(\varGamma _n\) and with some character, where both depend on the level of A.

These explicit constructions do not only give examples of Siegel modular forms; by applying Vignéras’ result for genus \(n=1\), we also obtain a criterion similar to the one in [16] for determining whether a Siegel theta series transforms like a modular form:

Theorem 1.5

Let \(\lambda \in {\mathbb {Z}}\) and let \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) be such that

$$\begin{aligned} f(U)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}A U)\bigr )\in {\mathcal {S}}({\mathbb {R}}^{m\times n}) \end{aligned}$$

and f is a solution of the \(n\times n\) system of partial differential equations

$$\begin{aligned} \Bigl (\mathbf{E }-\frac{{\Delta }_A}{4\pi }\Bigr )f= \lambda \cdot I\cdot f\qquad \text {with}\quad \mathbf{E }:= U^{\mathsf {T}}\frac{\partial }{\partial {U}}\quad \text {and}\quad {\Delta }_A\text { as defined above}. \end{aligned}$$

Then, for \(H=K=\mathrm {O}\) and A even, the theta series \(\vartheta _{H,K,f,A}\) in Definition 1.3 transforms like a Siegel modular form of genus n and weight \(m/2+\lambda \), where the level and character depend on A.

Remark 1.6

In this paper, we determine the transformation behavior of \(\vartheta _{H,K,f,A}\) with respect to the transformations \(Z\mapsto Z+S\) for a symmetric matrix \(S\in {\mathbb {Z}}^{n\times n}\) (see Lemma 4.2) and \(Z\mapsto -Z^{-1}\) (see Proposition 4.10). The results hold for any \(H,K\in {\mathbb {R}}^{m\times n}\) and we do not generally assume that A is even. By setting further preconditions for H, K and A, one can then construct vector-valued Siegel modular forms of genus n and weight \(m/2+\lambda \) on the full Siegel modular group or scalar-valued modular forms on congruence subgroups, see also Remark 2.2. However, we will not explicitly elaborate on that here.

The outline of the paper is as follows: In Sect. 2, we briefly summarize the most important notions about Siegel modular forms that are relevant for this paper. In the next section, we examine the complex vector space formed by the solutions of the \(n\times n\) system of partial differential equations from Theorem 1.5. Under the additional assumption that a solution f must satisfy the growth condition \(f(U)\exp (-\pi {\text {tr}}(U^{\mathsf {T}}AU))\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\), we explicitly determine a basis (which is finite if A is positive or negative definite and infinite otherwise) of this vector space. In Sect. 4, we show that these basis elements can be used to construct theta series of genus n that transform like Siegel modular forms of weight \(m/2+\lambda \). In order to do so, we first construct non-holomorphic theta series for positive definite quadratic forms. With some modifications, this can be generalized to theta series associated with indefinite quadratic forms.

2 Notation and preliminaries

We fix notation and summarize standard results about Siegel modular forms, referring to Andrianov [2, pp. 1–25] and Freitag [6] for further details. For convenience, we restate some definitions from the previous section. We also comment on results by Borcherds [4] and Vignéras [15] and point out differences to our set-up.

Denote the Siegel upper half-space by

$$\begin{aligned} {\mathbb {H}}_n :=\lbrace Z=X+iY \mid X,Y \in {{\mathbb {R}}}^{{n}\times {n}}\text { symmetric}, Y \text { positive definite}\rbrace . \end{aligned}$$

We let \(Y^{1/2}\) denote the uniquely determined symmetric positive definite matrix that satisfies \(Y^{1/2}\cdot Y^{1/2}=Y\). The same holds for the square root of A, when A is a positive definite matrix.

We define modular forms on \({\mathbb {H}}_n\) for the full Siegel modular group

$$\begin{aligned} \varGamma _n:=\big \lbrace M=\left( \begin{array}{ll} A&{}B\\ C&{}D \end{array}\right) \in {{\mathbb {Z}}}^{{2n}\times {2n}} \mid M^{\mathsf {T}}JM=J \big \rbrace ,\quad \text {where}\quad J=\left( \begin{array}{ll} \mathrm {O}&{}I_n\\ -I_n&{}\mathrm {O} \end{array}\right) , \end{aligned}$$

which operates on \({\mathbb {H}}_n\) by

$$\begin{aligned} Z\mapsto M\langle Z\rangle =(AZ+B)(CZ+D)^{-1}. \end{aligned}$$

The imaginary part Y of Z and the imaginary part \({\widetilde{Y}}\) of \(M\langle Z\rangle \) satisfy the relation

$$\begin{aligned} (C{\overline{Z}}+D)^{\mathsf {T}}{\widetilde{Y}} (CZ+D)=Y. \end{aligned}$$
(2.1)

In particular, \({\widetilde{Y}}\) is positive definite and symmetric.
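For genus \(n=1\), relation (2.1) reduces to the classical transformation of the imaginary part under Möbius transformations, which may serve as a quick sanity check:

```latex
% Genus n = 1: for M = (a b; c d) in SL_2(Z) and z = x + iy,
\widetilde{y} = \operatorname{Im}\Bigl(\frac{az+b}{cz+d}\Bigr)
              = \frac{y}{|cz+d|^2},
% so (2.1) becomes the scalar identity
(c\bar z+d)\,\widetilde{y}\,(cz+d)
  = |cz+d|^2\cdot\frac{y}{|cz+d|^2} = y.
```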

Definition 2.1

We call \(F:{\mathbb {H}}_n\longrightarrow {\mathbb {C}}\) a (classical) Siegel modular form of genus n and weight k if the following conditions hold:

  (a) The function F is holomorphic on \({\mathbb {H}}_n\),

  (b) For every \(M\in \varGamma _n\) we have \(F(M\langle Z\rangle )=\det (CZ+D)^{k}F(Z)\),

  (c) |F(Z)| is bounded on domains in \({\mathbb {H}}_n\) of the form \({\mathbb {H}}^{\varepsilon }_n:=\{X+iY\in {\mathbb {H}}_n\mid Y\ge \varepsilon \cdot I\}\) with \(\varepsilon >0\).

Note that the weight is not necessarily an integer. In this context, we define, as usual, for \(z\in {\mathbb {C}}\) and any non-integer exponent r that \(z^r:=\exp (r\log z)\), where \(\log z=\log |z|+i\arg (z)\), \(-\pi <\arg (z)\le \pi \).

Due to the Koecher principle (cf. [6, p. 44f.]), which holds for \(n>1\), all functions satisfying (a) and (b) admit a Fourier expansion over positive semi-definite even symmetric matrices and are in particular bounded on \({\mathbb {H}}^{\varepsilon }_n\) for any \(\varepsilon >0\). So we do not need to impose an analogue of (c) as a condition. If we consider non-holomorphic modular forms, the Koecher principle does not necessarily hold anymore. In our case, we build Siegel theta series from Schwartz functions and obtain absolutely convergent series, so these functions also satisfy condition (c).

Remark 2.2

The full Siegel modular group \(\varGamma _n\) is generated by the matrices \( \bigl (\begin{array}{cc} I_n&{}S\\ \mathrm {O}&{}I_n \end{array}\bigr )\) with \(S=S^{\mathsf {T}}\) and \(\bigl (\begin{array}{cc} \mathrm {O}&{}-I_n\\ I_n&{}\mathrm {O} \end{array}\bigr )\) (cf. [6, pp. 322–328]), so any function F with \(F(Z+S)=F(Z)\) for symmetric matrices \(S \in {{\mathbb {Z}}}^{{n}\times {n}}\) and \(F(-Z^{-1})=\det Z^k F(Z)\) satisfies condition (b). For the theta series with characteristics H, K that we construct here, we observe the following: Up to a factor depending on H, A and S, we can write \(\vartheta _{{H},{K}}(Z+S)\) as a theta series of the same form but with a slightly changed characteristic \(H,{\widetilde{K}}\), see Lemma 4.2. We can express \(\vartheta _{{H},{K}}(-Z^{-1})\) as a linear combination of theta series \(\vartheta _{{J+K},{-H}}(Z)\), where \(J\in A^{-1}{\mathbb {Z}}^{m\times n}{\text {mod}} {\mathbb {Z}}^{m\times n}\), see Proposition 4.10.

If A is an even unimodular matrix and \(H=K=\mathrm {O}\), the theta series transforms like a modular form on the full group \(\varGamma _n\), see Example 4.8 when A is positive definite and Example 4.11 when A is indefinite (in the last case we might obtain a character of \(\varGamma _n\) as an additional automorphic factor).

If H and K are rational matrices, we can take the series \(\vartheta _{{H},{K}}\) as entries of vector-valued functions, which then define modular forms on the full Siegel modular group. In another approach (see for example Andrianov and Maloletkin [3]), one could consider suitable congruence subgroups of finite index in \(\varGamma _n\).

In Sect. 3 as well as Sect. 4, we will consider a fixed decomposition of the non-degenerate matrix A of signature \((r,s)\), so we give a precise description here.

Remark 2.3

Let \(\mathbf {v_{1}},\ldots ,\mathbf {v_{r}}\) denote the eigenvectors that correspond to the positive eigenvalues of A and \(\mathbf {v_{r+1}},\ldots ,\mathbf {v_{m}}\) the ones that correspond to the negative eigenvalues. We normalize these eigenvectors in a suitable way so that for \(S=(\mathbf {v_{1}},\ldots ,\mathbf {v_{m}})\in {{\mathbb {R}}}^{{m}\times {m}}\)

$$\begin{aligned} S^{\mathsf {T}}AS={\mathcal {I}}\quad \text {with}\quad {\mathcal {I}}:= \left( \begin{array}{ll} I_r&{}\mathrm {O}\\ \mathrm {O}&{}-I_s \end{array}\right) . \end{aligned}$$

As \(\lbrace \mathbf {v_{1}},\ldots , \mathbf {v_{m}}\rbrace \) forms a basis of \({\mathbb {R}}^m\), we write any vector \(\mathbf {u}\in {\mathbb {R}}^m\) as \(\mathbf {u}=\sum _{i=1}^r \lambda _i \mathbf {v_{i}}+\sum _{i=r+1}^m \lambda _i \mathbf {v_{i}}\) and define \(\mathbf {u^+}:=\sum _{i=1}^r \lambda _i \mathbf {v_{i}}\) and \(\mathbf {u^-}:=\sum _{i=r+1}^m \lambda _i \mathbf {v_{i}}\).

So for the inverse of S, we have \(A=(S^{-1})^{\mathsf {T}}{\mathcal {I}}S^{-1}\). This enables us to write A as the sum \(A=A^{+}+A^{-}\) of the positive semi-definite and negative semi-definite matrices

$$\begin{aligned} A^{+}:=(S^{-1})^{\mathsf {T}}\left( \begin{array}{ll} I_r&{}\mathrm {O}\\ \mathrm {O}&{}\mathrm {O} \end{array}\right) S^{-1}\quad \text {and}\quad A^{-}:=(S^{-1})^{\mathsf {T}}\left( \begin{array}{ll} \mathrm {O}&{}\mathrm {O}\\ \mathrm {O}&{}-I_s \end{array}\right) S^{-1}. \end{aligned}$$

We also define the positive definite majorant matrix \(M:=(S^{-1})^{\mathsf {T}}S^{-1}=A^{+}-A^{-}\). If we write \(U\in {\mathbb {R}}^{m\times n}\) as \(U=U^++U^-\), where \(U^+:=(\mathbf {u^+_{1}},\ldots ,\mathbf {u^+_{n}})\) and \(U^-:=(\mathbf {u^-_{1}},\ldots ,\mathbf {u^-_{n}})\), it is straightforward to check that

$$\begin{aligned} {\text {tr}}\bigl ((U^+)^{\mathsf {T}}A U^+\bigr )={\text {tr}}\bigl (U^{\mathsf {T}}A^{+}U \bigr )\quad \text {and}\quad {\text {tr}}\bigl ((U^-)^{\mathsf {T}}A U^-\bigr )={\text {tr}}\bigl (U^{\mathsf {T}}A^{-}U \bigr ). \end{aligned}$$
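As a concrete illustration of this decomposition (the matrix is chosen by us for simplicity), consider the even hyperbolic matrix of signature \((1,1)\):

```latex
% Example: A = (0 1; 1 0), with eigenvalues +1 and -1 and normalized
% eigenvectors (1,1)^T/\sqrt{2} and (1,-1)^T/\sqrt{2}, so
S = \frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix},
\qquad S^{\mathsf{T}}AS = \begin{pmatrix}1&0\\0&-1\end{pmatrix} = \mathcal{I}.
% Since S^2 = I_2, i.e. S^{-1} = S, one obtains
A^{+} = \frac12\begin{pmatrix}1&1\\1&1\end{pmatrix},\qquad
A^{-} = \frac12\begin{pmatrix}-1&1\\1&-1\end{pmatrix},
% with A^{+}+A^{-} = A and positive definite majorant
M = A^{+}-A^{-} = I_2.
```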

As our construction of Siegel theta series in Sect. 4 is very similar to Borcherds’ set-up [4] for \(n=1\), we briefly recall his result and point out the main differences.

Remark 2.4

Borcherds considers a non-degenerate quadratic form Q with signature \((r,s)\), an even lattice \(L\subset {\mathbb {R}}^m\) with the associated dual lattice \(L'\) and an isometry v mapping \(L\otimes {\mathbb {R}}\) to \({\mathbb {R}}^{r,s}\). Considering the inverse images \(v^+\) and \(v^-\) of \({\mathbb {R}}^{r,0}\) and \({\mathbb {R}}^{0,s}\) under v, one decomposes \(L\otimes {\mathbb {R}}\) into the orthogonal direct sum of a positive definite subspace \(v^+\) and a negative definite subspace \(v^-\). For the projection of \({\lambda } \in L\otimes {\mathbb {R}}\) into \(v^{\pm }\) one writes \({\lambda }_{v^{\pm }}\) and obtains the positive definite quadratic form \(Q_v({\lambda })=Q({\lambda }_{v^+})-Q({\lambda }_{v^-})\). As the decomposition into the subspaces \(v^+\) and \(v^-\) is not unique, Borcherds’ theta series include an additional parameter to indicate the choice of \(v^+\in G(M)\), where the Grassmannian G(M) denotes the set of positive definite r-dimensional subspaces of \(L\otimes {\mathbb {R}}\). For \(z \in {\mathbb {H}}_1\), \(\mathbf {h},\mathbf {k}\in L\otimes {\mathbb {R}}\), \({\gamma }\in L'/L\), \(\Delta \) the Laplacian on \({\mathbb {R}}^m\), and \(p:{\mathbb {R}}^m\longrightarrow {\mathbb {R}}\) a polynomial that is homogeneous of degree \(\alpha \) in the first r variables and homogeneous of degree \(\beta \) in the last s variables, he defines

$$\begin{aligned} \vartheta _{L+{\gamma }}(z,\mathbf {h},\mathbf {k};v,p):=\sum _{{\lambda }\in L+{\gamma }}&\exp (-\Delta /8\pi y)(p)\bigl (v({\lambda }+\mathbf {h})\bigr )\\&\cdot e\bigl (z ({\lambda }+\mathbf {h})^2_{v^+}/2+{\overline{z}}({\lambda }+\mathbf {h})^2_{v^-}/2-({\lambda }+\mathbf {h}/2,\mathbf {k})\bigr ) \end{aligned}$$

and shows that this is a non-holomorphic modular form of weight \((r/2+\alpha ,s/2+\beta )\).

In the present paper, we fix the decomposition \(A=A^{+}+A^{-}\) and the majorant matrix \(M=A^{+}- A^{-}\) by taking the eigenvectors of A as a basis in \({\mathbb {R}}^m\). Then \(U\in {\mathbb {R}}^{m\times n}\) is projected onto \(U^+\) in the positive definite subspace and \(U^-\) in the negative definite subspace. However, choosing any other decomposition of A into a negative and a positive definite part leads to an analogous construction.

In Definition 1.3, we wrote \(\vartheta _{{H},{K}}\) in a form that makes the analogy with Vignéras’ construction (see Remark 2.5) visible. We can also write these theta series as

$$\begin{aligned} \vartheta _{{H},{K}}(Z)=\sum _{U\in H+ {\mathbb {Z}}^{m\times n}}&\exp (-{\text {tr}}({\Delta }_MY^{-1})/8\pi )\bigl (P(U)\bigr )\\&\cdot e\bigl ({\text {tr}}(U^{\mathsf {T}}A^{+}UZ)/2+{\text {tr}}(U^{\mathsf {T}}A^{-}U{\overline{Z}})/2+{\text {tr}}(K^{\mathsf {T}}AU)\bigr ), \end{aligned}$$

which resembles Borcherds’ construction. Note that we can multiply the series by \(\det Y^{s/2+\beta }\) to obtain the weight \(m/2+\lambda \) (where \(\lambda =\alpha -\beta -s\)) instead of \((r/2+\alpha ,s/2+\beta )\).

We conclude this section by reviewing Vignéras’ construction [16] and addressing essential differences.

Remark 2.5

Vignéras considers theta series of genus 1

$$\begin{aligned} \vartheta _{{\mathbf {0}},{\mathbf {0}}}(z)=y^{-\lambda /2}\sum \limits _{\mathbf {u}\in L}f(\mathbf {u}\sqrt{y}){\text {e}}\bigl ( Q(\mathbf {u})z\bigr ), \end{aligned}$$

where \(L\subset {\mathbb {R}}^m\) denotes a lattice, \(Q(\mathbf {u})=\frac{1}{2}\mathbf {u}^{\mathsf {T}}A \mathbf {u}\) a quadratic form of signature \((r,s)\) and \(z=x+iy\) an element of the upper half-plane \({\mathbb {H}}_1\). The following two requirements are imposed on the function f: Set \({\widetilde{f}}(\mathbf {u})=f(\mathbf {u})\exp \bigl (-2\pi Q(\mathbf {u})\bigr )\). Then for any polynomial \(p:{\mathbb {R}}^{m}\longrightarrow {\mathbb {R}}\) with \(\deg (p)\le 2\) and any partial derivative \(\partial ^{\alpha }\) with \(|\alpha |\le 2\),

$$\begin{aligned}p\cdot {\widetilde{f}} \in {\mathcal {L}}^1({\mathbb {R}}^{m})\cap {\mathcal {L}}^2({\mathbb {R}}^{m})\quad \text {and}\quad \partial ^{\alpha }{\widetilde{f}} \in {\mathcal {L}}^1({\mathbb {R}}^{m})\cap {\mathcal {L}}^2({\mathbb {R}}^{m}).\end{aligned}$$

Furthermore, f satisfies the differential equation of second order

$$\begin{aligned} \Bigl (E-\frac{\Delta _A}{4\pi }\Bigr )f=\lambda \cdot f\qquad \text {with}\quad E:=\sum _{d=1}^m u_d\frac{\partial }{\partial {u_d}}\quad \text {and}\quad \Delta _A:=\sum \limits _{a,b=1}^m\frac{\partial }{\partial u_{a}} (A^{-1})_{ab} \frac{\partial }{\partial u_{b}}. \end{aligned}$$

Then \(\vartheta _{{\mathbf {0}},{\mathbf {0}}}\) transforms like a modular form of weight \(m/2+\lambda \).
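A simple family of solutions in the positive definite case, recorded here for illustration: harmonic homogeneous polynomials are eigenfunctions of Vignéras’ operator.

```latex
% If A is positive definite and P is homogeneous of degree d with
% \Delta_A P = 0, Euler's identity gives E P = d\,P, hence
\Bigl(E-\frac{\Delta_A}{4\pi}\Bigr)P = d\cdot P,
% so f = P solves the differential equation with \lambda = d and yields a
% holomorphic theta series of weight m/2 + d.
% Concretely (our choice), for A = 2I_2 and P(u) = u_1 u_2:
\Delta_A P = 0,\qquad E P = 2\,u_1 u_2 = 2P,\qquad \lambda = 2.
```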

For higher genus \(n\in {\mathbb {N}}\), we introduce some notation to formulate an analogous growth condition. For \(p\in [1,\infty )\) let \({\mathcal {L}}^p({\mathbb {R}}^{m\times n})\) denote the Lebesgue space of functions \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {C}}\) for which

$$\begin{aligned} \Vert f\Vert _p:=\Biggl (\int \limits _{{\mathbb {R}}^{m\times n}}|f(U)|^pdU\Biggr )^{1/p} \end{aligned}$$

is finite. We use the usual multi-index notation on \({\mathbb {R}}^{m\times n}\), where \(\alpha \in {\mathbb {N}}_0^{m\times n}\) with \(|\alpha |=\sum _{i=1}^{m}\sum _{j=1}^{n} \alpha _{ij}\), so

$$\begin{aligned} U^{\alpha }=\prod \limits _{\begin{array}{c} 1\le i\le m\\ 1\le j\le n \end{array}} U_{ij}^{\alpha _{ij}}\quad \text {and}\quad \partial ^{\alpha }=\prod \limits _{\begin{array}{c} 1\le i\le m\\ 1\le j\le n \end{array}}\Bigl (\frac{\partial }{\partial U_{ij}}\Bigr )^{\alpha _{ij}}. \end{aligned}$$

For \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\), one sets \({\widetilde{f}} (U):=f(U)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}A U)\bigr )\) and, analogously to Vignéras, assumes that for any polynomial \(p:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) with \(\deg (p)\le 2\) and any partial derivative \(\partial ^{\alpha }\) with \(|\alpha |\le 2\),

$$\begin{aligned} p\cdot {\widetilde{f}} \in {\mathcal {L}}^1({\mathbb {R}}^{m\times n})\cap {\mathcal {L}}^2({\mathbb {R}}^{m\times n})\quad \text {and}\quad \partial ^{\alpha }{\widetilde{f}} \in {\mathcal {L}}^1({\mathbb {R}}^{m\times n})\cap {\mathcal {L}}^2({\mathbb {R}}^{m\times n}). \end{aligned}$$
(2.2)

This allows us to apply Vignéras’ result for theta series of genus 1 (as we make use of the fact that Hermite functions form an orthogonal basis of \({\mathcal {L}}^2\)-functions) and the Poisson summation formula.

However, for simplification, we replace assumption (2.2) by the more restrictive assumption that \({\widetilde{f}}\) is a Schwartz function.

3 A generalization of Vignéras’ differential equation

To derive an analogue of Vignéras’ result for Siegel modular forms of higher genus \(n\in {\mathbb {N}}\), we introduce matrix-valued operators generalizing E and \(\Delta _A\).

Definition 3.1

For \(U\in {\mathbb {R}}^{m\times n}\) let \(\partial /\partial U=\bigl (\partial /\partial U_{\mu \nu }\bigr )_{1\le \mu \le m,1\le \nu \le n}\). We define the generalized Euler operator

$$\begin{aligned}\mathbf{E }:= U^{\mathsf {T}}\frac{\partial }{\partial {U}}\quad \text {with}\quad \mathbf{E }_{ij}=\sum \limits _{d=1}^m U_{di}\frac{\partial }{\partial U_{dj}}\quad (1\le i\le n,\,1\le j\le n)\end{aligned}$$

and the generalized Laplace operator associated with A

$$\begin{aligned} {\Delta }_A := \Bigl (\frac{\partial }{\partial U}\Bigr )^{\mathsf {T}}A^{-1}\frac{\partial }{\partial U}\quad \text {with}\quad ({\Delta }_A)_{ij}=\sum \limits _{a,b=1}^m \frac{\partial }{\partial U_{ai}}(A^{-1})_{ab}\frac{\partial }{\partial U_{bj}}. \end{aligned}$$

For the normalized Laplacian \({\Delta }_I\) we simply write \({\Delta }\). Further, we set

$$\begin{aligned} {\mathcal {D}}_A:=\mathbf{E }-\frac{{\Delta }_A}{4\pi }. \end{aligned}$$

The \(n\times n\) system of partial differential equations

$$\begin{aligned} {\mathcal {D}}_A f= \lambda \cdot I\cdot f\quad \text {for }\lambda \in {\mathbb {Z}}\text { and }A \text { indefinite of signature }(r,s) \end{aligned}$$
(3.1)

is a direct generalization of the set-up in [16]. In this section, we examine the complex vector space formed by the solutions f of (3.1) that additionally satisfy the growth condition \(f(U)\exp (-\pi {\text {tr}}(U^{\mathsf {T}}AU))\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\). We explicitly determine a basis (which is finite if A is positive or negative definite and infinite otherwise) of this vector space.

3.1 Functions with a homogeneity property

As mentioned in the introduction, we employ polynomials with a certain homogeneity property to construct Siegel theta series. In the following, we introduce the complex vector space of all functions with this homogeneity property. For a differentiable function f, we show in Proposition 3.4 that f is homogeneous of degree \(\alpha \) if and only if f solves the system of partial differential equations \(\mathbf{E }f=\alpha \cdot I\cdot f\). Further, we show in Lemma 3.5 that for a polynomial function p it is already sufficient that \(\mathbf{E }p=C\cdot p\) holds for some \(C\in {\mathbb {C}}^{n\times n}\) to deduce that p is a homogeneous function.

Definition 3.2

For \(\alpha \in {\mathbb {N}}_0\), \(m,n\in {\mathbb {N}}\), we define the complex vector space

$$\begin{aligned} {\mathcal {F}}_{\alpha }^{m, n}:=\lbrace f: {\mathbb {C}}^{m\times n}\longrightarrow {\mathbb {C}}\mid f \text { continuous},\, f(UN)=\det N^\alpha f(U)\text { for all }N \in {{\mathbb {C}}}^{{n}\times {n}}\rbrace . \end{aligned}$$

For \(n=1\), this is the usual definition of a homogeneous function of non-negative degree. As a subspace, we consider all polynomials of this class, which is the space \({\mathcal {P}}_{\alpha }^{m,n}\) from the introduction.

Remark 3.3

The vector space \({\mathcal {P}}_{\alpha }^{m,n}\) is described by Maass [12]. He determines the structure of \({\mathcal {P}}_{\alpha }^{m,n}\), shows that it has finite dimension and even gives an explicit formula for the dimension. In the following, \({\mathcal {B}}_{\alpha }^{m,n}\) denotes a finite basis of \({\mathcal {P}}_{\alpha }^{m,n}\). We state some observations to show that we obtain non-trivial examples.

  • For \(m<n\), we have \({\mathcal {F}}_{\alpha }^{m, n}= {\mathbb {C}}\): We take \(U\in {\mathbb {R}}^{m\times n}\) such that \(f(U)\ne 0\). Multiplying by elementary matrices from the right, we can bring U into reduced column echelon form. If U has fewer rows than columns, at least the last column is a zero column. Setting \(N={\text {diag}}(1,\ldots ,1,\lambda )\) with \(\lambda \notin \lbrace 0,1\rbrace \) leads to the identity \(f(U)=\lambda ^\alpha f(U)\), which is only satisfied for \(\alpha =0\). The orbit of the right action of invertible matrices on \(U\in {\mathbb {C}}^{m\times n}\) is dense and f is continuous, so f is a constant function.

  • Note that \(f\cdot g\in {\mathcal {F}}_{\alpha +\beta }^{m, n}\) for \(f\in {\mathcal {F}}_{\alpha }^{m, n},g\in {\mathcal {F}}_{\beta }^{m, n}\) and \(f+ g\in {\mathcal {F}}_{\alpha }^{m, n}\) for \(f,g\in {\mathcal {F}}_{\alpha }^{m, n}\).

  • For \(m\ge n\) let \({\widetilde{U}}\in {{\mathbb {C}}}^{{n}\times {n}}\) be a square submatrix of maximal size of \(U\in {\mathbb {C}}^{m\times n}\). Clearly, we have \(\det {\widetilde{U}}^{\alpha }\in {\mathcal {P}}_{\alpha }^{m,n}\). Combining this with the previous point, we obtain all functions in \({\mathcal {P}}_{\alpha }^{m,n}\) as linear combinations of products of \(\alpha \) (possibly different) \(n\times n\) minors \(\det {\widetilde{U}}\).
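For instance (a small example of our own), for \(m=3\), \(n=2\) the minors just described give non-trivial elements of \({\mathcal {P}}_{1}^{3,2}\):

```latex
% Take U \in \mathbb{C}^{3\times 2} and let \widetilde{U} be its top
% 2x2 block, so
P(U) = \det\widetilde{U} = U_{11}U_{22}-U_{12}U_{21}.
% For every N \in \mathbb{C}^{2\times 2} the top 2x2 block of UN is
% \widetilde{U}N, hence
P(UN) = \det(\widetilde{U}N) = \det N\cdot P(U),
% i.e. P \in \mathcal{P}_1^{3,2}; products of \alpha such minors lie in
% \mathcal{P}_\alpha^{3,2}.
```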

Homogeneous functions that are also differentiable are characterized by the identity \(Ef=\alpha \cdot f\). We observe that this statement can be generalized.

Proposition 3.4

Let \(f: {\mathbb {C}}^{m\times n}\longrightarrow {\mathbb {C}}\) be a differentiable function. We have \(f\in {\mathcal {F}}_{\alpha }^{m, n}\) if and only if \(\mathbf{E }f=\alpha \cdot I\cdot f\).

Proof

For \(U\in {\mathbb {C}}^{m\times n}\) and \(N\in {\mathbb {C}}^{n\times n}\), the derivative of the entry \((UN)_{k\ell }=\sum \limits _{\nu =1}^n U_{k\nu }N_{\nu \ell }\) with \(1\le k\le m,\,1\le \ell \le n\) is

$$\begin{aligned} \frac{\partial (UN)_{k\ell }}{\partial N_{ij}}={\left\{ \begin{array}{ll} 0&{}\text { if }j \ne \ell ,\\ U_{ki}&{}\text { if }j=\ell . \end{array}\right. } \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{\partial }{\partial N_{ij}} \bigl (f(UN)\bigr )=\sum \limits _{k=1}^m \sum \limits _{\ell =1}^n \frac{\partial f}{\partial U_{k\ell }}(UN)\frac{\partial (UN)_{k\ell }}{\partial N_{ij}}=\sum \limits _{k=1}^m U_{ki}\frac{\partial f}{\partial U_{kj}} (UN). \end{aligned}$$

Hence, we obtain for the derivative of f(UN) with respect to N that

$$\begin{aligned} \frac{\partial }{\partial N} \bigl (f(UN)\bigr )=U^{\mathsf {T}}\frac{\partial f}{\partial U} (UN) \end{aligned}$$

and by the definition of the generalized Euler operator \(\mathbf{E }\) with respect to U it is

$$\begin{aligned} (\mathbf{E }f)(UN)=(UN)^{\mathsf {T}}\frac{\partial f}{\partial U}(UN)=N^{\mathsf {T}}\frac{\partial }{\partial N} \bigl (f(UN)\bigr ). \end{aligned}$$
(3.2)

The adjugate matrix \({\text {adj}}(N)\in {{\mathbb {C}}}^{{n}\times {n}}\) is defined as \(\bigl ({\text {adj}}(N)\bigr )_{ij}:=(-1)^{i+j} \det {{\widetilde{N}}}_{ji},\) where \({{\widetilde{N}}}_{ji}\) denotes the \((n-1)\times (n-1)\)-matrix obtained by deleting the j-th row and i-th column. Laplace expansion of the determinant gives

$$\begin{aligned} \frac{\partial }{\partial N_{ij}} (\det N)=\frac{\partial }{\partial N_{ij}} \biggl ( \sum \limits _{k=1}^n(-1)^{i+k}N_{ik}\det {{\widetilde{N}}}_{ik} \biggr )=(-1)^{i+j}\det {{\widetilde{N}}}_{ij}. \end{aligned}$$

Hence, the derivative of the determinant of N is the transpose of the adjugate matrix:

$$\begin{aligned} \frac{\partial }{\partial N} (\det N) = {\text {adj}}(N)^{\mathsf {T}}\end{aligned}$$
(3.3)

For \(f\in {\mathcal {F}}_{\alpha }^{m, n}\) the identity \(f(UN)=\det N^\alpha f(U)\) holds for all \(N\in {{\mathbb {C}}}^{{n}\times {n}}\). From equations (3.2) and (3.3) it follows

$$\begin{aligned} \bigl (\mathbf{E }f\bigr )(UN)= & {} N^{\mathsf {T}}\frac{\partial }{\partial N} \bigl (f(UN)\bigr )=N^{\mathsf {T}}\frac{\partial }{\partial N}\bigl (\det N^\alpha f(U)\bigr )\\= & {} \alpha \det N^{\alpha -1} \cdot N^{\mathsf {T}}{\text {adj}}(N)^{\mathsf {T}}\cdot f(U)=\alpha \det N^\alpha \cdot I\cdot f(U), \end{aligned}$$

since \({\text {adj}}(N) N=\det N\cdot I\). We set \(N=I\) and obtain the identity \(\mathbf{E }f=\alpha \cdot I\cdot f\).

To show the other implication, notice that \(f(UN)(\det N)^{-\alpha }\) is constant with respect to N if f satisfies \(\mathbf{E }f=\alpha \cdot I\cdot f\): Using the identities (3.2) and (3.3), we obtain

$$\begin{aligned} N^{\mathsf {T}}\frac{\partial }{\partial N} \bigl ( f(UN) \det N^{-\alpha }\bigr )\, =\det N^{-\alpha } \cdot \bigl (\mathbf{E }f\bigr )(UN)-\alpha \det N^{-\alpha }\cdot I\cdot f(UN)=\mathrm {O}. \end{aligned}$$

Thus, we have \(f(UN) \det N^{-\alpha }=C(U),\) where C is independent of N. For \(N=I\), this is f(U), and hence we conclude \(f(UN)=\det N^\alpha f(U)\). \(\square \)
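The equivalence just proved can be checked symbolically in a small case. The following sympy sketch (the choices \(m=n=2\) and \(f=\det \) with \(\alpha =1\) are illustrative assumptions, not part of the text) verifies both \(\mathbf{E }f=\alpha \cdot I\cdot f\) and \(f(UN)=\det N^{\alpha } f(U)\):

```python
import sympy as sp

# Illustrative check for m = n = 2 and f = det, so alpha = 1.
a, b, c, d, n11, n12, n21, n22 = sp.symbols('a b c d n11 n12 n21 n22')
U = sp.Matrix([[a, b], [c, d]])
N = sp.Matrix([[n11, n12], [n21, n22]])
f = U.det()

# Generalized Euler operator: (E f)_{ij} = sum_k U_{ki} * df/dU_{kj}
grad = sp.Matrix(2, 2, lambda i, j: sp.diff(f, U[i, j]))
Ef = U.T * grad

# E f = alpha * I * f with alpha = 1 ...
assert (Ef - f * sp.eye(2)).expand() == sp.zeros(2, 2)

# ... and indeed f(UN) = det(N)^alpha * f(U)
assert sp.expand((U * N).det() - N.det() * f) == 0
```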

Later we will only consider polynomial solutions. In this case, we can state the following lemma, which can be left aside for now, but will be used in the proof of Proposition 3.12.

Lemma 3.5

Let \(p:{\mathbb {C}}^{m\times n}\longrightarrow {\mathbb {C}}\) be a polynomial that solves the system of partial differential equations

$$\begin{aligned} \mathbf{E }p=C\cdot p\quad (C\in {\mathbb {C}}^{n\times n}). \end{aligned}$$
(3.4)

If p is not the zero function, the matrix C has the form \(C=\alpha \cdot I\) for some \(\alpha \in {\mathbb {N}}_0\).

Proof

First, we examine the case \(m=n=2\) and write \(U=\left( \begin{array}{ll} a&{}b\\ c&{}d \end{array}\right) \) and

$$\begin{aligned} p(a,b,c,d)=\sum \limits _{\alpha ,\beta ,\gamma ,\delta \in {\mathbb {N}}_0} c_{\alpha ,\beta ,\gamma ,\delta }a^{\alpha } b^\beta c^\gamma d^\delta \quad \text {with}\quad c_{\alpha ,\beta ,\gamma ,\delta }\in {\mathbb {C}}. \end{aligned}$$

By assumption, p satisfies the \(2\times 2\)-system of partial differential equations

$$\begin{aligned} \begin{pmatrix} a&{}c\\ b&{}d \end{pmatrix} \begin{pmatrix} \frac{\partial }{\partial a}&{}\frac{\partial }{\partial b}\\ \frac{\partial }{\partial c}&{}\frac{\partial }{\partial d} \end{pmatrix}p=\begin{pmatrix} C_{11}&{}C_{12}\\ C_{21}&{}C_{22} \end{pmatrix}\cdot p. \end{aligned}$$
(3.5)

Considering the upper left equation, we have

$$\begin{aligned} \Bigl (a\frac{\partial }{\partial a}+c\frac{\partial }{\partial c}\Bigr )p= \sum \limits _{\alpha ,\beta ,\gamma ,\delta \in {\mathbb {N}}_0} (\alpha +\gamma )c_{\alpha ,\beta ,\gamma ,\delta }a^{\alpha } b^\beta c^\gamma d^\delta =C_{11}\cdot p, \end{aligned}$$

thus \(\alpha +\gamma =C_{11}\) holds for every non-vanishing coefficient \(c_{\alpha ,\beta ,\gamma ,\delta }\). Analogously, we deduce from the bottom right equation that \(\beta +\delta =C_{22}\) holds. As p is a polynomial, \(C_{11}\) and \(C_{22}\) are non-negative integers. We write \(C_{11}=k\) and \(C_{22}=\ell \) from now on. We have shown that p is homogeneous (in the original sense) of degree k in the variables a, c of the first column and homogeneous of degree \(\ell \) in the variables b, d of the second column. It is easy to see that \(C_{12}=C_{21}=0\) holds: By assumption, the polynomial p satisfies the upper right equation

$$\begin{aligned} \Bigl (a\frac{\partial }{\partial b}+c\frac{\partial }{\partial d}\Bigr )p=C_{12}\cdot p. \end{aligned}$$
(3.6)

As the left-hand side is a polynomial that is homogeneous of degree \(k+1\) in a, c and of degree \(\ell -1\) in b, d, while the right-hand side is a multiple of p, i. e. homogeneous of degree k in a, c and of degree \(\ell \) in b, d, we deduce that \(C_{12}\) must equal zero. Analogously, we conclude from the bottom left equation of (3.5) that \(C_{21}=0\).

It remains to be shown that \(k=\ell \) holds. We write

$$\begin{aligned} p(a,b,c,d)=\sum _{\alpha +\gamma =k} a^\alpha c^\gamma p_{\alpha ,\gamma }(b,d), \end{aligned}$$

where the \(p_{\alpha ,\gamma }\) denote homogeneous polynomials in b, d of degree \(\ell \). Then equation (3.6) with \(C_{12}=0\) has the form

$$\begin{aligned} \sum \limits _{\alpha +\gamma =k} a^{\alpha +1}c^\gamma \frac{\partial }{\partial b}p_{\alpha ,\gamma }(b,d)+\sum \limits _{\alpha +\gamma =k} a^{\alpha }c^{\gamma +1}\frac{\partial }{\partial d}p_{\alpha ,\gamma }(b,d)\equiv 0. \end{aligned}$$

We obtain by comparison of the coefficients of \(a^\nu c^\mu ,\, 0\le \nu \le k+1,\,\mu =k+1-\nu \):

$$\begin{aligned} \frac{\partial }{\partial d}p_{0,k}(b,d)\equiv & {} 0\qquad \text {and}\qquad \frac{\partial }{\partial d}p_{\alpha ,\gamma }(b,d)=-\frac{\partial }{\partial b}p_{\alpha -1,\gamma +1}(b,d)\\&\text {for}\quad 1\le \alpha \le k,\,\gamma =k-\alpha . \end{aligned}$$

Thus, we recursively determine the structure of \(p_{\alpha ,\gamma }\) to be \(p_{\alpha ,\gamma }(b,d)=\sum _{r=0}^\alpha e_rb^{\ell -r}d^r\) with \(e_r\in {\mathbb {C}}\). In particular, we see that the exponent of d does not exceed \(\alpha \), i. e. \(\delta \le \alpha \).

We make use of the symmetric structure of the polynomial p and exchange a and c and also b and d in the equations above. Then we obtain \(\beta \le \gamma \). By interchanging a and d along with their exponents \(\alpha \) and \(\delta \) as well as b and c along with their exponents \(\beta \) and \(\gamma \) and using the bottom left equation of (3.5), we obtain \(\alpha \le \delta \) and \(\gamma \le \beta \). We have shown \(\alpha =\delta \) and \(\gamma =\beta \), and in particular, \(k=\ell \) holds.

For general \(m,n\in {\mathbb {N}}\), we reduce the \(n\times n\)-system \(\mathbf{E }p=C\cdot p\) to the case \(m=n=2\). We write \(U=(\mathbf {u_{1}},\ldots ,\mathbf {u_{n}})\) with \(\mathbf {u_{i}}\in {\mathbb {C}}^m\) and choose \(N\in {\mathbb {C}}^{n\times n}\) such that the i-th column of U is replaced by \(a\mathbf {u_{i}}+c\mathbf {u_{j}}\) and the j-th column by \(b\mathbf {u_{i}}+d\mathbf {u_{j}}\), where we assume that \(i<j\), i. e. we have

$$\begin{aligned} UN=(\mathbf {u_{1}},\ldots ,a\mathbf {u_{i}}+c\mathbf {u_{j}},\ldots ,b\mathbf {u_{i}}+d\mathbf {u_{j}},\ldots ,\mathbf {u_{n}}). \end{aligned}$$

A simple calculation yields

$$\begin{aligned} \begin{pmatrix} a\frac{\partial }{\partial a}+c\frac{\partial }{\partial c}&{}a\frac{\partial }{\partial b}+c\frac{\partial }{\partial d}\\ b\frac{\partial }{\partial a}+d\frac{\partial }{\partial c}&{}b\frac{\partial }{\partial b}+d\frac{\partial }{\partial d} \end{pmatrix} \bigl (p(UN)\bigr )= \left( \begin{pmatrix} \mathbf{E }_{ii}&{}\mathbf{E }_{ij}\\ \mathbf{E }_{ji}&{}\mathbf{E }_{jj} \end{pmatrix} p\right) (UN). \end{aligned}$$

As p solves (3.4) by assumption, we have

$$\begin{aligned} \begin{pmatrix} a&{}c\\ b&{}d \end{pmatrix} \begin{pmatrix} \frac{\partial }{\partial a}&{}\frac{\partial }{\partial b}\\ \frac{\partial }{\partial c}&{}\frac{\partial }{\partial d} \end{pmatrix} \bigl (p(UN)\bigr )= \begin{pmatrix} C_{ii}&{}C_{ij}\\ C_{ji}&{}C_{jj} \end{pmatrix}\cdot p(UN). \end{aligned}$$

For \(2\times 2\)-systems of this form we have shown above that \(C_{ii}=C_{jj}=\alpha \) for some \(\alpha \in {\mathbb {N}}_0\) and \(C_{ij}=C_{ji}=0\). As we can choose any \(i,j\in \lbrace 1,\ldots , n\rbrace \) with \(i< j\), we deduce the claim. \(\square \)

3.2 Description of theta series with modular transformation behavior by partial differential equations

In this section, we establish the connection between the functions with the homogeneity property described in the last section and the functions that are employed in Sect. 4 to construct modular Siegel theta series. Moreover, we apply Vignéras’ result for \(n=1\) to explicitly give a basis for the vector space of solutions of (3.1) under the additional growth condition.

First, we state a lemma that holds for any symmetric non-degenerate matrix A of signature (r, s). Namely, we compute the commutator of the k-th power of the Laplacian \(({\text {tr}}{\Delta }_A)^k\) (we will drop the brackets and write \({\text {tr}}{\Delta }_A^k\) for simplicity) and the Euler operator.

Lemma 3.6

The commutator of \(\mathbf{E }_{ij}\,(1\le i \le n,\,1\le j\le n)\) and \({\text {tr}}{\Delta }_A^k\) is

$$\begin{aligned} \bigl [\mathbf{E }_{ij},{\text {tr}}{\Delta }_A^k\bigr ]:=\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A^k - {\text {tr}}{\Delta }_A^k \cdot \mathbf{E }_{ij}= - 2 k\cdot ({\Delta }_A)_{ij} \cdot {\text {tr}}{\Delta }_A^{k-1}\quad (k\in {\mathbb {N}}). \end{aligned}$$

Proof

We show the claim by induction on k. For \(k=1\) one calculates the commutator of \({\text {tr}}{\Delta }_A\) and \(\mathbf{E }_{ij}\). By definition we have

$$\begin{aligned}{\text {tr}}{\Delta }_A \cdot \mathbf{E }_{ij}= \sum \limits _{c=1}^n\sum \limits _{a,b,d=1}^m \frac{\partial }{\partial U_{ac}} (A^{-1})_{ab} \frac{\partial }{\partial U_{bc}} U_{di} \frac{\partial }{\partial U_{dj}},\end{aligned}$$

which we can write, denoting by \(\delta _{ij}\) the Kronecker delta, as

$$\begin{aligned} \sum \limits _{c=1}^n&\sum \limits _{a,b,d=1}^m (A^{-1})_{ab}\left( U_{di} \frac{\partial ^3}{\partial U_{ac}\partial U_{bc} \partial U_{dj}}+ \delta _{ad} \delta _{ci} \frac{\partial ^2}{\partial U_{bc} \partial U_{dj}}+ \delta _{bd} \delta _{ci} \frac{\partial ^2}{\partial U_{ac} \partial U_{dj}} \right) \\&\quad = \sum \limits _{c=1}^n\sum \limits _{a,b,d=1}^m U_{di} \frac{\partial }{\partial U_{dj}}(A^{-1})_{ab} \frac{\partial ^2}{\partial U_{ac}\partial U_{bc}}\\&\qquad +\sum \limits _{a,b=1}^m (A^{-1})_{ab}\left( \frac{\partial ^2}{\partial U_{aj} \partial U_{bi}} + \frac{\partial ^2}{\partial U_{bj} \partial U_{ai}}\right) . \end{aligned}$$

Since \(A^{-1}\) is symmetric, this is \(\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A + 2 \cdot ({\Delta }_A)_{ij}\). The operators \(({\Delta }_A)_{ij}\) and \({\text {tr}}{\Delta }_A\) commute; thus we deduce for \(k\mapsto k+1\)

$$\begin{aligned} {\text {tr}}{\Delta }_A^{k+1}\cdot \mathbf{E }_{ij}&= {\text {tr}}{\Delta }_A\cdot \bigl (\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A^k+ 2 k\cdot ({\Delta }_A)_{ij} \cdot {\text {tr}}{\Delta }_A^{k-1} \bigr ) \\&=\bigl ( \mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A+2\cdot ({\Delta }_A)_{ij} \bigr )\cdot {\text {tr}}{\Delta }_A^k+2k\cdot ({\Delta }_A)_{ij}\cdot {\text {tr}}{\Delta }_A^k \\&=\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A^{k+1}+2 (k+1)\cdot ({\Delta }_A)_{ij}\cdot {\text {tr}}{\Delta }_A^k. \end{aligned}$$

This completes the induction, and the claim follows. \(\square \)

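For \(k=1\), the commutator relation can also be confirmed symbolically. The following sympy sketch (the matrix A, the test polynomial g and the choice \(m=n=2\) are arbitrary illustrative assumptions) checks \(\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A-{\text {tr}}{\Delta }_A\cdot \mathbf{E }_{ij}=-2({\Delta }_A)_{ij}\) entrywise:

```python
import sympy as sp

m, n = 2, 2
U = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'u{i}{j}'))
A = sp.Matrix([[2, 1], [1, 3]])        # sample symmetric invertible A (an assumption)
Ainv = A.inv()

def lap_ij(g, i, j):
    # (Delta_A)_{ij} = sum_{a,b} d/dU_{ai} (A^{-1})_{ab} d/dU_{bj}
    return sp.expand(sum(Ainv[a, b] * sp.diff(g, U[a, i], U[b, j])
                         for a in range(m) for b in range(m)))

def tr_lap(g):
    return sum(lap_ij(g, c, c) for c in range(n))

def euler_ij(g, i, j):
    # E_{ij} = sum_d U_{di} d/dU_{dj}
    return sum(U[d, i] * sp.diff(g, U[d, j]) for d in range(m))

g = U[0, 0]**3 * U[1, 1] + U[0, 1] * U[1, 0]**2   # arbitrary test polynomial

for i in range(n):
    for j in range(n):
        comm = euler_ij(tr_lap(g), i, j) - tr_lap(euler_ij(g, i, j))
        assert sp.expand(comm + 2 * lap_ij(g, i, j)) == 0
```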
We can now conclude that all solutions of (3.1) can be traced back to functions that have the homogeneity property of degree \(\lambda \) by applying the previous lemma and Proposition 3.4.

Lemma 3.7

Let \(f,g:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) denote functions for which \(\exp (c_1{\text {tr}}{\Delta }_A )f\) and \(\exp ( c_2{\text {tr}}{\Delta }_A)g\) are well-defined for any \(c_1,c_2\in {\mathbb {R}}\) (we apply this result to polynomials f and g in the following, hence these conditions make sense). Moreover, we assume that f and g are related by \(f=\exp (-{\text {tr}}{\Delta }_A/8\pi )g\). Then f is a solution of (3.1) if and only if g satisfies \(\mathbf{E }g=\lambda \cdot I\cdot g\), i. e. \(g\in {\mathcal {F}}_{\lambda }^{m, n}\).

Proof

We set \(c:=-1/8\pi \) to shorten notation. For

$$\begin{aligned} f(U)=\exp (c{\text {tr}}{\Delta }_A)\bigl (g(U)\bigr )=\sum \limits _{k=0}^\infty \frac{c^k}{k!}{\text {tr}}{\Delta }_A^k \bigl (g(U)\bigr )\quad (U\in {\mathbb {R}}^{m\times n}) \end{aligned}$$

we consider the entry (i, j) for \(1\le i \le n,\,1\le j\le n\) of the system of partial differential equations (3.1):

$$\begin{aligned} \mathbf{E }_{ij}f+2c\cdot ({\Delta }_A)_{ij} f&=\sum \limits _{k=0}^\infty \frac{c^k}{k!}\bigl (\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A^k \bigr ) g+2\sum \limits _{k=0}^\infty \frac{c^{k+1}}{k!}\bigl ( ({\Delta }_A)_{ij} \cdot {\text {tr}}{\Delta }_A^k \bigl ) g \\&=\mathbf{E }_{ij}g+\sum \limits _{k=1}^\infty \frac{c^k}{k!}\bigl (\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A^k \bigr ) g+2\sum \limits _{k=1}^\infty \frac{c^k}{(k-1)!}\bigl ( ({\Delta }_A)_{ij} \cdot {\text {tr}}{\Delta }_A^{k-1} \bigl ) g \\&= \mathbf{E }_{ij}g+\sum \limits _{k=1}^\infty \frac{c^k}{k!}\bigl (\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A^k+2k\cdot ({\Delta }_A)_{ij}\cdot {\text {tr}}{\Delta }_A^{k-1} \bigr )g \end{aligned}$$

Due to Lemma 3.6, we have

$$\begin{aligned} \bigl (\mathbf{E }_{ij}\cdot {\text {tr}}{\Delta }_A^k+2k\cdot ({\Delta }_A)_{ij}\cdot {\text {tr}}{\Delta }_A^{k-1} \bigr )g=\bigl ({\text {tr}}{\Delta }_A^k\cdot \mathbf{E }_{ij}\bigr )g \end{aligned}$$

and therefore obtain

$$\begin{aligned} \mathbf{E }_{ij}f+2c\cdot ({\Delta }_A)_{ij} f=\sum \limits _{k=0}^\infty \frac{c^k}{k!}{\text {tr}}{\Delta }_A^k\bigl ( \mathbf{E }_{ij}g\bigr )=\exp \bigl (c{\text {tr}}{\Delta }_A\bigr )(\mathbf{E }_{ij}g). \end{aligned}$$

If \(\mathbf{E }_{ij}g=\lambda \cdot \delta _{ij}\cdot g\) holds, the right-hand side equals \(\lambda \cdot \delta _{ij}\cdot f\). As we have \(g=\exp (-c{\text {tr}}{\Delta }_A)f\) (see Property 4.3), we deduce that \(\exp (c{\text {tr}}{\Delta }_A)(\mathbf{E }_{ij}g)=\lambda \cdot \delta _{ij}\cdot f\) implies \(\mathbf{E }_{ij}g=\lambda \cdot \delta _{ij}\cdot g\). By Proposition 3.4, this is equivalent to \(g\in {\mathcal {F}}_{\lambda }^{m, n}\). \(\square \)
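Since the exponential series terminates on polynomials, the statement of the lemma can be tested directly. A sympy sketch follows; the positive definite matrix A and the choice \(g=(\det U)^2\in {\mathcal {F}}_{2}^{2, 2}\) (so \(\lambda =2\)) are illustrative assumptions:

```python
import sympy as sp

m, n = 2, 2
U = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'u{i}{j}'))
A = sp.Matrix([[2, 1], [1, 3]])        # sample positive definite A (an assumption)
Ainv = A.inv()
c = -1 / (8 * sp.pi)

def lap_ij(h, i, j):
    # (Delta_A)_{ij} = sum_{a,b} d/dU_{ai} (A^{-1})_{ab} d/dU_{bj}
    return sum(Ainv[a, b] * sp.diff(h, U[a, i], U[b, j])
               for a in range(m) for b in range(m))

def tr_lap(h):
    return sum(lap_ij(h, v, v) for v in range(n))

def euler_ij(h, i, j):
    return sum(U[d, i] * sp.diff(h, U[d, j]) for d in range(m))

g = U.det()**2                          # g lies in F_2^{2,2}, so lambda = 2
lam = 2

# f = exp(c * tr Delta_A) g: the series terminates since g is a polynomial
f, term, k = sp.S(0), g, 0
while term != 0:
    f += c**k / sp.factorial(k) * sp.expand(term)
    term = sp.expand(tr_lap(term))
    k += 1

# entrywise check of (3.1): E_ij f + 2c (Delta_A)_ij f = lam * delta_ij * f
for i in range(n):
    for j in range(n):
        lhs = euler_ij(f, i, j) + 2 * c * lap_ij(f, i, j)
        rhs = lam * f if i == j else 0
        assert sp.expand(lhs - rhs) == 0
```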

In the next proposition, we consider (3.1) for positive definite matrices A, namely the system of partial differential equations

$$\begin{aligned} {\mathcal {D}}_Ap=\alpha \cdot I\cdot p\quad \text {for }\alpha \in {\mathbb {N}}_0\text { and }A\text { positive definite.} \end{aligned}$$
(3.7)

We determine a finite basis of all solutions of (3.7) by additionally imposing a certain growth condition. Together with Proposition 4.7, where it is shown that theta series associated with these functions transform like modular forms, we obtain Theorem 1.5 for positive definite matrices A. In the proof, we employ Vignéras’ result [16] and the fact that we can explicitly construct a finite basis \({\mathcal {B}}_{\alpha }^{m,n}\) of \({\mathcal {P}}_{\alpha }^{m,n}\) due to Maass’ result [12].

Proposition 3.8

Let \({\mathcal {B}}_{\alpha }^{m,n}\) denote a finite basis of \({\mathcal {P}}_{\alpha }^{m,n}\) and let \(A\in {\mathbb {Z}}^{m\times m}\) denote a positive definite symmetric matrix. Every solution \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) of (3.7) that additionally satisfies the growth condition \({\widetilde{f}}(U):=f(U)\exp \bigl ( -\pi {\text {tr}}(U^{\mathsf {T}}A U) \bigr )\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\) is a polynomial. Moreover, a finite basis of this space of solutions is given by

$$\begin{aligned} p=\exp \bigl (-{\text {tr}}{\Delta }_A/8\pi \bigr ) P\quad \text {with } P\in {\mathcal {B}}_{\alpha }^{m,n}. \end{aligned}$$

Proof

We give a short review of Vignéras’ reasoning and apply it to functions with matrix variables. We identify \({\mathbb {R}}^{m\times n}\) with \({\mathbb {R}}^{m n}\) by writing \({\mathbb {R}}^{m\times n}\ni U=(\mathbf {u_{1}},\ldots ,\mathbf {u_{n}}),\) \(\mathbf {u_{i}}\in {\mathbb {R}}^m,\) as column vector

$$\begin{aligned} \mathbf {u}=\begin{pmatrix} \mathbf {u_{1}}\\ \vdots \\ \mathbf {u_{n}} \end{pmatrix}\in {\mathbb {R}}^{mn}. \end{aligned}$$

If f satisfies the system of differential equations (3.7) (with \(c=-1/8\pi \) as before), taking the trace yields in particular that

$$\begin{aligned} {\text {tr}}({\mathcal {D}}_A)f=\alpha n\cdot f \end{aligned}$$
(3.8)

holds. We have

$$\begin{aligned} {\text {tr}}\mathbf{E }=\sum _{i=1}^{n}\sum _{d=1}^{m}U_{di}\frac{\partial }{\partial U_{di}}\quad \text {and}\quad {\text {tr}}{\Delta }_A =\sum \limits _{\nu =1}^n \sum \limits _{\mu ,\rho =1}^m \frac{\partial }{\partial U_{\mu \nu }}(A^{-1})_{\mu \rho }\frac{\partial }{\partial U_{\rho \nu }}, \end{aligned}$$

which are the usual Euler operator on \({\mathbb {R}}^{m n}\) and the Laplacian associated with the positive definite \(mn\times mn\)-matrix that consists of blocks of \(m\times m\)-matrices that are zero except for n copies of A on the diagonal. We write U in a suitable basis, in which the quadratic form is given by \((S^{-1})^{\mathsf {T}}A S^{-1}=I\), to express \({\widetilde{f}}\) in an orthogonal basis of Hermite functions \(H_{\mathbf {k}}\) in mn variables as

$$\begin{aligned} {\widetilde{f}}=\sum _{\mathbf {k}\in {\mathbb {N}}_0^{mn}} c_{\mathbf {k}}H_{\mathbf {k}}\quad \text {with}\quad c_{\mathbf {k}}\in {\mathbb {R}}\text { and } \mathbf {k}=(k_{\mu \nu })_{1\le \mu \le m, 1\le \nu \le n}, \end{aligned}$$

where the Hermite functions in several variables are defined in terms of Hermite functions in one dimension:

$$\begin{aligned} H_{\mathbf {k}}(U)=\prod _{\mu =1}^{m}\prod _{\nu =1}^{n}H_{k_{\mu \nu }}(U_{\mu \nu })\quad \text {with }H_{k_{\mu \nu }}(U_{\mu \nu })=\exp \bigl (\pi U_{\mu \nu }^2\bigr )\frac{d^{k_{\mu \nu }}}{dU_{\mu \nu }^{k_{\mu \nu }}} \exp \bigl (-2\pi U_{\mu \nu }^2\bigr ) \end{aligned}$$

Since f is a solution of (3.8), a basis of all functions \({\widetilde{f}}\) is determined by the finite set of Hermite functions \(H_{\mathbf {k}}\) with \(|\mathbf {k}|=\sum _{\mu =1}^{m}\sum _{\nu =1}^{n} k_{\mu \nu }=\alpha n\) (this is Vignéras’ argument, see [16]), where we can rewrite \(H_{\mathbf {k}}(U)=p_{\mathbf {k}}(U)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}AU)\bigr )\) with the Hermite polynomial \(p_{\mathbf {k}}\). So \(f(U)={\widetilde{f}}(U)\exp \bigl (\pi {\text {tr}}(U^{\mathsf {T}}A U)\bigr )\) can be expanded in terms of finitely many orthogonal Hermite polynomials \(p_{\mathbf {k}}\) and thus is a polynomial itself.

Thus, \(g:=\exp \bigl ({\text {tr}}{\Delta }_A/8\pi \bigr )f\) is a polynomial that satisfies \(\mathbf{E }g=\alpha \cdot I\cdot g\) by Lemma 3.7. We can choose any basis \({\mathcal {B}}_{\alpha }^{m,n}\) of \({\mathcal {P}}_{\alpha }^{m,n}\) to describe these homogeneous polynomials. Hence, we also obtain a basis of the solutions of (3.7). As \({\mathcal {P}}_{\alpha }^{m,n}\) is a finite-dimensional vector space, the basis \({\mathcal {B}}_{\alpha }^{m,n}\) is finite. \(\square \)

Now we let A denote an indefinite matrix of signature (r, s) again. When we consider the associated system of partial differential equations (3.1), the solutions, which we describe in Proposition 3.12, can be traced back to functions that are defined on \(U^+\) and \(U^-\) respectively, where \(U^+\) is the projection of U onto the subspace on which A is positive definite and \(U^-\) the projection onto the subspace on which A is negative definite. So we first consider \({\mathcal {I}}=\bigl (\begin{array}{cc} I_r&{}\mathrm {O}\\ \mathrm {O}&{}-I_s\end{array}\bigr )\) instead of A and the corresponding system of partial differential equations

$$\begin{aligned} {\mathcal {D}}_{{\mathcal {I}}}f= \lambda \cdot I\cdot f\quad (\lambda \in {\mathbb {Z}}), \end{aligned}$$
(3.9)

which can easily be split up into one part that depends on the first r rows of U and another part depending on the last s rows of U. We write \(U_r\) and \(U_s\) for these projections of U. Here, we have \(M=I\) and thus \({\Delta }_M={\Delta }\), and we show that a basis of all solutions is given by the functions

$$\begin{aligned} \exp \bigl ( -{\text {tr}}{\Delta }/8\pi \bigr )\bigl (P(U)\bigr )\exp \bigl (-2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr ), \end{aligned}$$
(3.10)

where P splits as \(P(U)=P_r(U_r)\cdot P_s(U_s)\) with \(P_r\in {\mathcal {B}}_{\alpha }^{m,n}\subset {\mathcal {P}}_{\alpha }^{m,n}\) and \(P_s\in {\mathcal {B}}_{\beta }^{m,n}\subset {\mathcal {P}}_{\beta }^{m,n}\) with \((\alpha ,\beta )\in {\mathbb {N}}_0^2\) such that \(\alpha -\beta =\lambda +s\).

Lemma 3.9

If one applies the Laplacian \({\Delta }_{\mathcal {I}}\) and the Euler operator to a product of functions \(g,h:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\), then the following rules hold:

$$\begin{aligned} {\Delta }_{\mathcal {I}}(g\cdot h)&= g \cdot {\Delta }_{\mathcal {I}}h+h \cdot {\Delta }_{\mathcal {I}}g+\Bigl (\frac{\partial }{\partial U} g\Bigr )^{\mathsf {T}}\cdot {\mathcal {I}}\cdot \Bigl (\frac{\partial }{\partial U} h\Bigr )+\Bigl (\frac{\partial }{\partial U} h\Bigr )^{\mathsf {T}}\cdot {\mathcal {I}}\cdot \Bigl (\frac{\partial }{\partial U} g\Bigr )\\ \mathbf{E }(g\cdot h)&= g\cdot \mathbf{E }h+h\cdot \mathbf{E }g \end{aligned}$$

We omit the proof, as the claim follows by a straightforward calculation.

The part of (3.10) that depends on the subspace of \({\mathbb {R}}^{m\times n}\) on which the quadratic form is negative definite satisfies a slightly different system of partial differential equations from the one given in Lemma 3.7, as an additional exponential factor occurs.

Lemma 3.10

Let \({\mathcal {B}}_{\beta }^{m,n}\) denote a basis of \({\mathcal {P}}_{\beta }^{m,n}\). We consider the system of partial differential equations

$$\begin{aligned} {\mathcal {D}}_{-I}f=-(\beta +m)\cdot I\cdot f. \end{aligned}$$
(3.11)

A finite basis of all solutions of (3.11) that additionally satisfy the growth condition \(f(U)\exp (\pi {\text {tr}}(U^{\mathsf {T}}U))\in {\mathcal {S}}({\mathbb {R}}^{m \times n})\) is given by the functions

$$\begin{aligned} f_P(U):=\exp \bigl (-{\text {tr}}{\Delta }/8\pi \bigr )\bigl (P(U)\bigr )\exp \bigl (-2\pi {\text {tr}}(U^{\mathsf {T}}U)\bigr )\quad \text {with}\quad P\in {\mathcal {B}}_{\beta }^{m,n}. \end{aligned}$$

Proof

We define \(g(U):=\exp (-2\pi {\text {tr}}(U^{\mathsf {T}}U))\) and \(h_P(U):=\exp (-{\text {tr}}{\Delta }/8\pi )(P(U))\). Both functions satisfy systems of partial differential equations similar to (3.11): we check that

$$\begin{aligned} \bigl (\mathbf{E }g\bigr )(U)=-4\pi g(U)\cdot U^{\mathsf {T}}U \end{aligned}$$

and

$$\begin{aligned} \bigl ({\Delta }g\bigr )(U)=-4\pi m \cdot I\cdot g(U)+16\pi ^2 g(U) \cdot U^{\mathsf {T}}U \end{aligned}$$

hold. Hence, we have

$$\begin{aligned} \frac{1}{4\pi }{\Delta }g=-\mathbf{E }g-m\cdot I\cdot g. \end{aligned}$$
(3.12)

Due to Proposition 3.8 for \(A=I\), the identity

$$\begin{aligned} \frac{1}{4\pi }{\Delta }h_P=\mathbf{E }h_P -\beta \cdot I\cdot h_P \end{aligned}$$
(3.13)

holds if and only if \(P\in {\mathcal {P}}_{\beta }^{m,n}\). Using the multiplication rules from Lemma 3.9, and applying (3.12) and (3.13) in the calculation of \(\Delta f_{P}=\Delta (g \cdot h_{P})\), we obtain

$$\begin{aligned} \frac{1}{4\pi }{\Delta }f_P&=\frac{1}{4\pi }\biggl (g\cdot {\Delta }h_P+h_P\cdot {\Delta }g +\Bigl (\frac{\partial }{\partial U} g\Bigr )^{\mathsf {T}}\Bigl (\frac{\partial }{\partial U} h_P\Bigr )+\Bigl (\frac{\partial }{\partial U} h_P\Bigr )^{\mathsf {T}}\Bigl (\frac{\partial }{\partial U} g\Bigr )\biggr )\\&=g\cdot \bigl (\mathbf{E }h_P -\beta \cdot I\cdot h_P \bigr )+h_P\cdot \bigl ( -\mathbf{E }g-m\cdot I\cdot g\bigr )-2 g\cdot \mathbf{E }h_P\\&=-(\beta +m)\cdot I\cdot f_P-\mathbf{E }f_P, \end{aligned}$$

where we use in the second step that \(\mathbf{E }^{\mathsf {T}}h_P=\mathbf{E }h_P\) holds, since \(h_P\) satisfies (3.13) and the Laplacian is symmetric.

Analogously, one can show that for any solution f of the system (3.11) of partial differential equations, the function \(h(U)=f(U)\exp \bigl ( 2\pi {\text {tr}}(U^{\mathsf {T}}U)\bigr )\) satisfies \({\mathcal {D}}_I h = \beta \cdot I \cdot h\). Since

$$\begin{aligned}h(U)\exp \bigl ( -\pi {\text {tr}}(U^{\mathsf {T}}U)\bigr )= f(U)\exp (\pi {\text {tr}}(U^{\mathsf {T}}U))\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\end{aligned}$$

by assumption, we can apply Proposition 3.8, which states that we can describe a finite basis for all functions h by \(h_P=\exp (-{\text {tr}}{\Delta }/8\pi )(P(U))\) with \(P\in {\mathcal {B}}_{\beta }^{m,n}\). Thus, the functions \(f_P\) form a finite basis of the solutions of (3.11) that satisfy the aforementioned growth condition. \(\square \)
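For a concrete instance of the lemma, take \(m=n=2\), \(\beta =1\) and \(P=\det U\) (an illustrative choice): then \({\text {tr}}{\Delta }P=0\), so \(h_P=P\). The sympy sketch below verifies (3.11) entrywise in the equivalent form \(\mathbf{E }_{ij}f+\frac{1}{4\pi }{\Delta }_{ij}f=-(\beta +m)\,\delta _{ij}f\), which uses \({\Delta }_{-I}=-{\Delta }\):

```python
import sympy as sp

m, n = 2, 2
U = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'u{i}{j}'))
trUU = (U.T * U).trace()

def lap_ij(h, i, j):          # (Delta)_{ij} for A = I
    return sum(sp.diff(h, U[a, i], U[a, j]) for a in range(m))

def euler_ij(h, i, j):
    return sum(U[d, i] * sp.diff(h, U[d, j]) for d in range(m))

beta = 1
P = U.det()                   # P in P_beta for beta = 1; here tr Delta P = 0,
h = P                         # so h_P = exp(-tr Delta / 8 pi) P equals P itself
f = h * sp.exp(-2 * sp.pi * trUU)

# check (3.11): E_ij f + (1/4pi) Delta_ij f = -(beta + m) delta_ij f
for i in range(n):
    for j in range(n):
        lhs = euler_ij(f, i, j) + lap_ij(f, i, j) / (4 * sp.pi)
        rhs = -(beta + m) * f if i == j else 0
        # multiply by the inverse Gaussian so the residual is a polynomial
        assert sp.expand((lhs - rhs) * sp.exp(2 * sp.pi * trUU)) == 0
```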

In the next lemma, we show that the substitution of U by \(S^{-1}U\) leads to the desired system of partial differential equations that is associated with A.

Lemma 3.11

Let \(S\in {\mathbb {R}}^{m\times m}\) be such that \(A=(S^{-1})^{\mathsf {T}}{\mathcal {I}}S^{-1}\) and consider the functions \(f,f[S^{-1}]:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\), where \(f[S^{-1}](U)=f(S^{-1}U)\). The function f satisfies (3.9) if and only if \(f[S^{-1}]\) satisfies (3.1).

Proof

Let \(i,j\in \{1,\ldots ,n\}\). It suffices to calculate

$$\begin{aligned} ({\Delta }_A)_{ij}\bigl (f[S^{-1}](U)\bigr )&=\sum \limits _{a,b=1}^m\frac{\partial }{\partial U_{ai}}(A^{-1})_{ab}\frac{\partial }{\partial U_{bj}}\bigl (f(S^{-1}U)\bigr )\\&=\sum \limits _{a,b,\mu ,\nu =1}^m (S^{-1})_{\mu a}(A^{-1})_{ab}(S^{-1})_{\nu b}\frac{\partial ^2f}{\partial U_{\mu i}\partial U_{\nu j}}(S^{-1}U)\\&=\sum \limits _{\mu =1}^m{\mathcal {I}}_{\mu \mu }\frac{\partial ^2f}{\partial U_{\mu i}\partial U_{\mu j}}(S^{-1}U)=\bigl (({\Delta }_{\mathcal {I}})_{ij} f\bigr )(S^{-1}U) \end{aligned}$$

and

$$\begin{aligned} \mathbf{E }_{ij}\bigl (f[S^{-1}](U)\bigr )&=\sum \limits _{d=1}^m U_{di}\frac{\partial }{\partial U_{dj}}\bigl (f(S^{-1}U)\bigr )=\sum \limits _{d,\nu =1}^m U_{di}(S^{-1})_{\nu d}\frac{\partial f}{\partial U_{\nu j}}(S^{-1}U)\\&=\sum \limits _{\nu =1}^m (S^{-1}U)_{\nu i}\frac{\partial f}{\partial U_{\nu j}}(S^{-1}U)=(\mathbf{E }_{ij}f)(S^{-1}U) \end{aligned}$$

to deduce the claim. \(\square \)
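The substitution rule can likewise be confirmed symbolically. In the sketch below, the invertible matrix S and the test function f are arbitrary illustrative choices, with \(m=n=2\) and \({\mathcal {I}}={\text {diag}}(1,-1)\):

```python
import sympy as sp

m, n = 2, 2
U = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'u{i}{j}'))
W = sp.Matrix(m, n, lambda i, j: sp.Symbol(f'w{i}{j}'))
Ical = sp.diag(1, -1)
S = sp.Matrix([[1, 1], [0, 2]])        # sample invertible S (an assumption)
Sinv = S.inv()
A = Sinv.T * Ical * Sinv
Ainv = A.inv()

V = Sinv * U                            # the substituted argument S^{-1} U
sub = {W[p, q]: V[p, q] for p in range(m) for q in range(n)}

fW = W[0, 0]**2 * W[1, 1] + W[0, 1] * W[1, 0]**2   # arbitrary test function f
fSU = fW.subs(sub)                                 # f[S^{-1}](U) = f(S^{-1} U)

for i in range(n):
    for j in range(n):
        # left: (Delta_A)_{ij} applied to f(S^{-1} U) in the variables U
        lhs = sum(Ainv[a, b] * sp.diff(fSU, U[a, i], U[b, j])
                  for a in range(m) for b in range(m))
        # right: ((Delta_I)_{ij} f)(S^{-1} U)
        rhs = sum(Ical[a, a] * sp.diff(fW, W[a, i], W[a, j])
                  for a in range(m)).subs(sub)
        assert sp.expand(lhs - rhs) == 0
```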

Proposition 3.12

Let \({\mathcal {B}}_{\alpha }^{m,n}\) denote a basis of \({\mathcal {P}}_{\alpha }^{m,n}\) and let \(A\in {\mathbb {Z}}^{m\times m}\) denote a non-degenerate symmetric matrix of signature (r, s). As in Remark 2.3, we write A as the sum of a positive semi-definite matrix \(A^{+}\) and a negative semi-definite matrix \(A^{-}\) and define \(M:=A^{+}-A^{-}\). The functions

$$\begin{aligned} f(U)=f_{\alpha ,\beta }(U)= \exp \bigl ( -{\text {tr}}{\Delta }_M/8\pi \bigr )\bigl (P(U)\bigr )\exp \bigl ( 2\pi {\text {tr}}(U^{\mathsf {T}}A^{-}U)\bigr ), \end{aligned}$$

where \(P\in {\mathcal {P}}_{\alpha +\beta }^{m,n}\) is given as the product \(P(U)=P_r(U^+)\cdot P_s(U^-)\) with \(P_r\in {\mathcal {B}}_{\alpha }^{m,n}\subset {\mathcal {P}}_{\alpha }^{m,n}\) and \(P_s\in {\mathcal {B}}_{\beta }^{m,n}\subset {\mathcal {P}}_{\beta }^{m,n}\) for \((\alpha ,\beta )\in {\mathbb {N}}_0^2\) such that \(\alpha -\beta =\lambda +s\), form a (possibly infinite) basis for the space of solutions \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) of (3.1) that additionally satisfy the growth condition

$$\begin{aligned} f(U)\exp \bigl ( -\pi {\text {tr}}(U^{\mathsf {T}}A U) \bigr )\in {\mathcal {S}}({\mathbb {R}}^{m\times n}). \end{aligned}$$

Proof

We consider the case \(A={\mathcal {I}}\). First we take \((\alpha ,\beta )\in {\mathbb {N}}_0^2\) with \(\alpha -\beta =\lambda +s\) to be fixed and show that \(f=f_{\alpha ,\beta }\) solves (3.9). As the eigenvectors of A form the canonical basis of \({\mathbb {R}}^m\), the polynomial P splits as \(P(U)=P_r(U_r)\cdot P_s(U_s)\), where \(U_r\in {\mathbb {R}}^{r\times n}\) consists of the first r rows of U and \(U_s\in {\mathbb {R}}^{s\times n}\) of the last s rows of U. The exponential part of f has the form \(\exp \bigl (-2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr ),\) so we can write \(f:=f_r\cdot f_s\), where \(f_r\) denotes the part dependent on \(U_r\) and \(f_s\) the part dependent on \(U_s\). By Lemma 3.9, we have

$$\begin{aligned} \mathbf{E }f=f_r\cdot \mathbf{E }f_s+f_s\cdot \mathbf{E }f_r. \end{aligned}$$

The expression

$$\begin{aligned} {\Delta }_{\mathcal {I}}f= & {} {\Delta }_{\mathcal {I}}(f_r\cdot f_s)=f_r\cdot {\Delta }_{\mathcal {I}}f_s+f_s\cdot {\Delta }_{\mathcal {I}}f_r\\&+\Bigl (\frac{\partial }{\partial U} f_r\Bigr )^{\mathsf {T}}\cdot {\mathcal {I}}\cdot \Bigl (\frac{\partial }{\partial U} f_s\Bigr )+\Bigl (\frac{\partial }{\partial U} f_s\Bigr )^{\mathsf {T}}\cdot {\mathcal {I}}\cdot \Bigl (\frac{\partial }{\partial U} f_r\Bigr ) \end{aligned}$$

simplifies to

$$\begin{aligned} {\Delta }_{\mathcal {I}}f=f_r\cdot {\Delta }_{\mathcal {I}}f_s+f_s\cdot {\Delta }_{\mathcal {I}}f_r, \end{aligned}$$

since

$$\begin{aligned} \frac{\partial }{\partial U} f_r=\begin{pmatrix} \frac{\partial }{\partial U_r}f_r\\ \mathrm {O} \end{pmatrix}\quad \text {and}\quad \frac{\partial }{\partial U} f_s=\begin{pmatrix} \mathrm {O}\\ \frac{\partial }{\partial U_s}f_s \end{pmatrix}. \end{aligned}$$

These relations also show that we can write \({\Delta }_{\mathcal {I}}f_r={\Delta }_{I_r}f_r\) and \({\Delta }_{\mathcal {I}}f_s=-{\Delta }_{I_s}f_s\). Then we consider the system of partial differential equations depending on the first r rows of U, where \(f_r\) corresponds to the function f from Lemma 3.7. Independently of that, we consider the part depending on the last s rows of U and apply Lemma 3.10 to \(f_s\). Putting these results together, we obtain

$$\begin{aligned} {\Delta }_{\mathcal {I}}f&=4\pi \cdot \Bigl ( f_r\cdot \bigl ( \mathbf{E }f_s+(\beta +s)\cdot I\cdot f_s\bigr )+f_s\cdot \bigl ( \mathbf{E }f_r-\alpha \cdot I\cdot f_r\bigr )\Bigr )\\&=4\pi \cdot \bigl ( \mathbf{E }(f_r\cdot f_s)+(-\alpha +\beta +s)\cdot I\cdot (f_r\cdot f_s)\bigr )\\&=4\pi \cdot \bigl ( \mathbf{E }f - (\alpha -\beta -s)\cdot I \cdot f\bigr ), \end{aligned}$$

where \(\alpha -\beta -s=\lambda \).
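This computation can be illustrated in the smallest indefinite case. A sympy sketch for \(m=2\), \(n=1\), \((r,s)=(1,1)\), \(\alpha =1\), \(\beta =0\), hence \(\lambda =0\) (the concrete choice \(P(U)=P_r(U_r)=u_0\) is an illustrative assumption): here \({\text {tr}}{\Delta }P=0\), so the basis element (3.10) is simply \(u_0\exp (-2\pi u_1^2)\), and we verify \(\mathbf{E }f-\frac{1}{4\pi }{\Delta }_{{\mathcal {I}}}f=\lambda f\).

```python
import sympy as sp

# m = 2, n = 1, signature (r, s) = (1, 1); alpha = 1, beta = 0, so
# lambda = alpha - beta - s = 0.  P(U) = P_r(U_r) = u0, and tr Delta P = 0.
u0, u1 = sp.symbols('u0 u1')
f = u0 * sp.exp(-2 * sp.pi * u1**2)    # basis element (3.10) for this choice

Ef = u0 * sp.diff(f, u0) + u1 * sp.diff(f, u1)            # Euler operator (n = 1)
lapf = sp.diff(f, u0, 2) - sp.diff(f, u1, 2)              # Delta_I for I = diag(1, -1)

lam = 0
residual = Ef - lapf / (4 * sp.pi) - lam * f
# multiply by the inverse Gaussian so the residual is a polynomial
assert sp.expand(residual * sp.exp(2 * sp.pi * u1**2)) == 0
```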

To show that these functions form a basis of all solutions, we employ a similar argument as in the proof of Proposition 3.8. Again, we use Vignéras’ result to show that the solutions f of (3.9) have a certain form: We define the function \({\widetilde{f}}(U):=f(U)\exp \bigl ( -\pi {\text {tr}}(U^{\mathsf {T}}{\mathcal {I}}U) \bigr )\), which is a Schwartz function by assumption. Furthermore, identify \({\mathbb {R}}^{m\times n}\) with \({\mathbb {R}}^{m n}\) by writing \(U\in {\mathbb {R}}^{m\times n}\) as a column vector in \({\mathbb {R}}^{mn}\). As we have

$$\begin{aligned} {\text {tr}}(U^{\mathsf {T}}{\mathcal {I}}U)=\sum _{\nu =1}^n\Bigl (\sum _{\mu =1}^{r} U_{\mu \nu }^2-\sum _{\mu =r+1}^{r+s} U_{\mu \nu }^2\Bigr ), \end{aligned}$$

which equals the normalized quadratic form of signature (rnsn) on \({\mathbb {R}}^{mn}\), we write \({\widetilde{f}}\) as

$$\begin{aligned} {\widetilde{f}}(U)=f(U)\exp \biggl (-\pi \sum _{\nu =1}^n\Bigl (\sum _{\mu =1}^{r} U_{\mu \nu }^2-\sum _{\mu =r+1}^{r+s} U_{\mu \nu }^2\Bigr )\biggr ). \end{aligned}$$

As an \({\mathcal {L}}^2({\mathbb {R}}^{m n})\)-function, \({\widetilde{f}}\) is given in an orthogonal basis of Hermite functions \(H_{\mathbf {k}}\) in mn variables in the form of \({\widetilde{f}}=\sum _{\mathbf {k}\in {\mathbb {N}}_0^{mn}} c_{\mathbf {k}}H_{\mathbf {k}}\) with \(c_{\mathbf {k}}\in {\mathbb {R}}\). Since f is a solution of

$$\begin{aligned} {\text {tr}}({\mathcal {D}}_{{\mathcal {I}}})f=\lambda n\cdot f\quad (\lambda \in {\mathbb {Z}}), \end{aligned}$$
(3.14)

we restrict the possible basis elements that appear in the expansion of \({\widetilde{f}}\):

$$\begin{aligned} {\widetilde{f}}=\sum \limits _{\begin{array}{c} \mathbf {k}\in {\mathbb {N}}_0^{mn}\\ \varepsilon (\mathbf {k})=n(\lambda +s) \end{array}} c_{\mathbf {k}}H_{\mathbf {k}},\quad \text {where}\quad \varepsilon (\mathbf {k}):=\sum _{\nu =1}^n\Bigl (\sum _{\mu =1}^{r} k_{\mu \nu }-\sum _{\mu =r+1}^{r+s} k_{\mu \nu }\Bigr ) \end{aligned}$$

Thus, as a consequence of Vignéras’ result for genus 1, any solution of (3.14) is given as a (possibly infinite) linear combination of functions

$$\begin{aligned} f_{\mathbf {k}}(U):=H_{\mathbf {k}}(U)\exp \biggl (\pi \sum _{\nu =1}^n\Bigl (\sum _{\mu =1}^{r} U_{\mu \nu }^2-\sum _{\mu =r+1}^{r+s} U_{\mu \nu }^2\Bigr )\biggr ), \end{aligned}$$

where the Hermite functions on \({\mathbb {R}}^{m n}\) (respectively \({\mathbb {R}}^{m\times n}\)) are given as product of one-dimensional Hermite functions:

$$\begin{aligned} H_{\mathbf {k}}(U)=\prod _{\mu =1}^{m}\prod _{\nu =1}^{n}H_{k_{\mu \nu }}(U_{\mu \nu })=p(U_r)q(U_s)\exp \biggl ( -\pi \sum _{\mu =1}^{m}\sum _{\nu =1}^n U_{\mu \nu }^2\biggr ) \end{aligned}$$

with polynomials p, q in the variables \(U_r, U_s\) respectively. Rewriting \(f_{\mathbf {k}}\) as

$$\begin{aligned} \begin{aligned} f_{\mathbf {k}}(U)&=p(U_r)q(U_s)\exp \Bigl ( -2\pi \sum _{\mu =r+1}^{r+s}\sum _{\nu =1}^n U_{\mu \nu }^2\Bigr )\\&=p(U_r)q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr ), \end{aligned} \end{aligned}$$
(3.15)

each solution of (3.9) is given as a linear combination of functions of the form (3.15). The system of partial differential equations (3.9) is separable, i. e. can be broken into the part that depends on \(U_r\) and the part that depends on \(U_s\). Likewise, \(f_{\mathbf {k}}\) is given by a polynomial factor p depending on \(U_r\) and a factor of the form \(q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr )\), where q also denotes a polynomial. We can write \({\mathcal {D}}_{{\mathcal {I}}}={\mathcal {D}}^r+{\mathcal {D}}^s\) such that the differential operator \({\mathcal {D}}^r\) vanishes on functions that depend only on \(U_s\), and has the form \({\mathcal {D}}_{I_r}\) when applied to a function defined on \(U_r\). Analogously, \({\mathcal {D}}^s\) only depends on \(U_s\) and is of the form \({\mathcal {D}}_{-I_s}\) when applied to functions of \(U_s\). So we have \({\mathcal {D}}_{{\mathcal {I}}} f_{\mathbf {k}}=\lambda \cdot I\cdot f_{\mathbf {k}}\) with

$$\begin{aligned} ({\mathcal {D}}_{{\mathcal {I}}} f_{\mathbf {k}})(U)= & {} q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr ){\mathcal {D}}^r\bigl (p(U_r)\bigr )\\&+p(U_r){\mathcal {D}}^s\bigl (q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr )\bigr ). \end{aligned}$$

For \(f_{\mathbf {k}}(U)\ne 0\) we divide by \(f_{\mathbf {k}}\) and obtain, for each entry of the system of partial differential equations, a sum of two expressions that depend on different variables; hence both expressions must be constant. It follows that a function \(f_{\mathbf {k}}\) solving (3.9) is given as the product described in (3.15) with the additional restriction that

$$\begin{aligned} \frac{{\mathcal {D}}^r\bigl (p(U_r)\bigr )}{p(U_r)}= & {} C_r\quad \text {and}\quad \frac{{\mathcal {D}}^s\bigl (q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr )\bigr )}{q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr )}=C_s\\&\text {with }C_r,C_s\in {\mathbb {R}}^{n\times n}\text { and } C_r+C_s=\lambda \cdot I. \end{aligned}$$

We show that \(C_r=\alpha \cdot I\) holds for some \(\alpha \in {\mathbb {N}}_0\), and thus \(C_s=(\lambda -\alpha )\cdot I\). By applying the operator \(\exp \bigl ({\text {tr}}{\Delta }/8\pi \bigr )\) to \(p(U_r)\), we deduce, analogously to the proof of Lemma 3.7, that \(p(U_r)\) satisfies

$$\begin{aligned} {\mathcal {D}}^r\bigl (p(U_r)\bigr )=C_r\cdot p(U_r) \end{aligned}$$

if and only if the polynomial \(P_r(U_r):=\exp \bigl ({\text {tr}}{\Delta }/8\pi \bigr )\bigl (p(U_r)\bigr )\) satisfies \(\mathbf{E }P_r=C_r\cdot P_r\). We have shown in Lemma 3.5 that this system of partial differential equations admits polynomial solutions only if \(C_r=\alpha \cdot I\) with \(\alpha \in {\mathbb {N}}_0\).

Thus, every solution f of (3.9) is described by basis elements \(f_{\mathbf {k}}\) that consist of two factors that depend on different variables: p solves the system of partial differential equations in Proposition 3.8, where we have shown that these solutions can be described by a basis of homogeneous polynomials of degree \(\alpha \). Similarly, the function \(q(U_s)\exp \bigl ( -2\pi {\text {tr}}(U_s^{\mathsf {T}}U_s)\bigr )\) solves the system of equations in Lemma 3.10, where we also described a basis of solutions using homogeneous polynomials of degree \(\beta \). We conclude that all solutions of (3.9) are described by the functions \(f_{\alpha ,\beta }\) defined above with \(\alpha ,\beta \in {\mathbb {N}}_0\) such that \(\alpha -\beta =\lambda +s\). Thus, the basis consists of infinitely many elements if \(r,s>0\) (i. e. when A is indefinite) and finitely many otherwise (i. e. when A is positive or negative definite).

We substitute \(U\mapsto S^{-1}U\) and apply Lemma 3.11 to obtain the result for the system of partial differential equations (3.1). Note that for every basis element \(f_{\alpha ,\beta }\) the polynomial P splits as \(P(U)=P_r(U^+)\cdot P_s(U^-)\) by assumption. \(\square \)
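The one-dimensional building blocks of the basis elements above can be illustrated numerically. The following Python sketch implements Hermite functions with Gaussian weight \(\exp (-\pi u^2)\) via the physicists' recurrence (the precise normalization here is an assumption for illustration only; the paper fixes its own in Sect. 3) and checks their pairwise orthogonality, which underlies the basis property of the products \(H_{\mathbf {k}}\).

```python
import math

def hermite_poly(k, x):
    """Physicists' Hermite polynomial H_k(x) via the recurrence
    H_0 = 1, H_1 = 2x, H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h0, h1 = 1.0, 2.0 * x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * j * h0
    return h1

def hermite_function(k, u):
    """One-dimensional Hermite function with Gaussian weight exp(-pi u^2)
    (rescaled physicists' convention, an assumption for illustration)."""
    x = math.sqrt(2.0 * math.pi) * u
    return hermite_poly(k, x) * math.exp(-math.pi * u * u)

def inner(j, k, n=4000, lim=6.0):
    """Trapezoidal approximation of the L^2 inner product on [-lim, lim]."""
    h = 2.0 * lim / n
    return h * sum(hermite_function(j, -lim + i * h) * hermite_function(k, -lim + i * h)
                   for i in range(n + 1))

# Distinct Hermite functions are orthogonal; equal indices give a positive norm.
assert abs(inner(0, 1)) < 1e-10 and abs(inner(1, 2)) < 1e-10
assert inner(2, 2) > 0.0
```

The product structure of \(H_{\mathbf {k}}\) on \({\mathbb {R}}^{m\times n}\) is then simply the entrywise product of such one-variable functions.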

4 Construction of theta series with modular transformation behavior

In this section, we construct Siegel theta series, which transform like modular forms of weight \(m/2+\lambda \), arising from the functions that we considered in the last section as solutions of \({\mathcal {D}}_A f=\lambda \cdot I \cdot f\). We explicitly determine the transformation behavior of the theta series with respect to \(Z\mapsto Z+S\) (for a symmetric matrix \(S\in {{\mathbb {Z}}}^{{n}\times {n}}\)) and \(Z\mapsto -Z^{-1}\). To state the next lemma, in which we describe the transformation behavior of \(\vartheta _{{H},{K}}\) with respect to the former transformation, we introduce the following notation for matrices:

Definition 4.1

 

  1. (a)

    For \(M\in {{\mathbb {Z}}}^{{\mu }\times {\mu }}\) we define \(M_0\in {{\mathbb {Z}}}^{{\mu }\times {\mu }}\) by \((M_0)_{ij}=M_{ii}\) for \(i=j\) and zero otherwise.

  2. (b)

    We write \(\mathrm {1}_{\mu \nu }\) for a matrix with \(\mu \) rows and \(\nu \) columns, whose entries are all equal to 1.
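As a quick illustration of this notation (a pure-Python sketch with concrete small matrices), the following code builds \(M_0\) and \(\mathrm {1}_{\mu \nu }\) and verifies the identity \({\text {tr}}(S_0\mathrm {1}_{nm}A_0R)=\sum _{\nu ,\mu }S_{\nu \nu }A_{\mu \mu }R_{\mu \nu }\) together with the congruence \({\text {tr}}(R^{\mathsf {T}}ARS)\equiv {\text {tr}}(S_0\mathrm {1}_{nm}A_0R)\ ({\text {mod}}\ 2)\) that is used in the proof of Lemma 4.2.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

def diag_part(M):          # M_0 from Definition 4.1(a)
    return [[M[i][j] if i == j else 0 for j in range(len(M))]
            for i in range(len(M))]

def ones(mu, nu):          # 1_{mu nu} from Definition 4.1(b)
    return [[1] * nu for _ in range(mu)]

A = [[2, 1, 0], [1, 4, -1], [0, -1, 2]]    # symmetric, m = 3
S = [[1, 3], [3, -2]]                       # symmetric, n = 2
R = [[1, -2], [0, 3], [2, 1]]               # integer matrix, m x n

lhs = trace(matmul(matmul(matmul(transpose(R), A), R), S))
rhs = trace(matmul(matmul(matmul(diag_part(S), ones(2, 3)), diag_part(A)), R))
explicit = sum(S[v][v] * A[u][u] * R[u][v] for v in range(2) for u in range(3))

assert rhs == explicit          # the matrix form of the diagonal double sum
assert (lhs - rhs) % 2 == 0     # the mod-2 congruence from Lemma 4.2's proof
```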

Lemma 4.2

Let \(S\in {{\mathbb {Z}}}^{{n}\times {n}}\) denote a symmetric matrix. With respect to \(Z\mapsto Z+S\), the theta series from Definition 1.3 transforms as follows:

$$\begin{aligned} \vartheta _{{H},{K}}(Z+S)={\text {e}}\bigl (- {\text {tr}}(H^{\mathsf {T}}AHS)/2- {\text {tr}}(S_0\mathrm {1}_{nm}A_0H)/2\bigr )\vartheta _{{H},{{\widetilde{K}}}}(Z) \end{aligned}$$

with

$$\begin{aligned} {\widetilde{K}}:=K+HS+\frac{1}{2} A^{-1}A_0\mathrm {1}_{mn}S_0 \end{aligned}$$

Proof

Write \(U=H+R\) with \(R\in {\mathbb {Z}}^{m\times n}\) such that

$$\begin{aligned} {\text {tr}}(U^{\mathsf {T}}AUS)={\text {tr}}\bigl (H^{\mathsf {T}}AHS\bigr )+2{\text {tr}}\bigl ((HS)^{\mathsf {T}}AR\bigr )+{\text {tr}}\bigl (R^{\mathsf {T}}ARS\bigr ). \end{aligned}$$

It is straightforward to see that

$$\begin{aligned} {\text {tr}}\bigl (H^{\mathsf {T}}AHS\bigr )+2{\text {tr}}\bigl ((HS)^{\mathsf {T}}AR\bigr )=-{\text {tr}}\bigl (H^{\mathsf {T}}AHS\bigr )+2{\text {tr}}\bigl ((HS)^{\mathsf {T}}AU\bigr ). \end{aligned}$$

As A and S are both symmetric and \(x^2\equiv x\ ({\text {mod}} 2)\) for any \(x\in {\mathbb {Z}}\), we have

$$\begin{aligned} {\text {tr}}\bigl (R^{\mathsf {T}}ARS\bigr )\equiv \sum \limits _{\nu =1}^n\sum \limits _{\mu =1}^m R_{\mu \nu }A_{\mu \mu }S_{\nu \nu }\ ({\text {mod}} 2). \end{aligned}$$

To rewrite the expression on the right-hand side in terms of matrices, we introduce the matrix \(\mathrm {1}_{nm}\in {\mathbb {Z}}^{n\times m}\) that only contains 1’s as entries and obtain

$$\begin{aligned} {\text {e}}\bigl ({\text {tr}}(R^{\mathsf {T}}ARS)/2\bigr )&={\text {e}}\bigl ({\text {tr}}(S_0\mathrm {1}_{nm}A_0R)/2\bigr )\\&={\text {e}}\Bigl ({\text {tr}}\bigl (( A^{-1}A_0\mathrm {1}_{mn}S_0)^{\mathsf {T}}AU\bigr )/2- {\text {tr}}\bigl (S_0\mathrm {1}_{nm}A_0H\bigr )/2\Bigr ). \end{aligned}$$

\(\square \)
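Lemma 4.2 can be checked numerically in the simplest case \(m=n=1\) with \(A=(2)\), \(p=1\), and \(S=1\), truncating the theta series (a sketch; all traces collapse to scalars here):

```python
import cmath, math

def e(x):                      # e(x) = exp(2 pi i x), as in the paper
    return cmath.exp(2j * math.pi * x)

def theta(H, K, z, N=20):
    """Truncation of theta_{H,K}(z) for m = n = 1, A = (2), p = 1:
    the sum over u in H + Z of e(A u^2 z / 2 + K A u)."""
    return sum(e((H + r) ** 2 * z + 2 * K * (H + r)) for r in range(-N, N + 1))

H, K, S = 1.0 / 3.0, 1.0 / 5.0, 1        # S: symmetric 1x1 integer matrix
z = 0.3 + 0.8j                           # point in the upper half-plane

# Lemma 4.2 for these data: theta_{H,K}(z + S) equals
# e(-H^2 A S / 2 - S A H / 2) * theta_{H, K~}(z) with K~ = K + H S + S / 2.
lhs = theta(H, K, z + S)
phase = e(-(H ** 2) * S - H * S)
rhs = phase * theta(H, K + H * S + 0.5 * S, z)
assert abs(lhs - rhs) < 1e-9
```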

In the following two sections, we determine how the theta series behaves under \(Z\mapsto -Z^{-1}\). To put it briefly, we calculate the Fourier transform of the summand and then apply the Poisson summation formula. We define the Fourier transform associated with the matrix A:

Definition 4.3

Let \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {C}}\) such that \(f\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\). Then \({\widehat{f}}\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\) denotes the Fourier transform

$$\begin{aligned} {\widehat{f}}(V):=\int \limits _{{\mathbb {R}}^{m\times n}}f(U){\text {e}}\bigl ({\text {tr}}( V^{\mathsf {T}}A U)\bigr )dU \end{aligned}$$

with dU the Euclidean volume element.

Note that we do not take the standard definition of the Fourier transform as a unitary operator here; rather, this normalization produces the additional factor \(|\det A|^{-n/2}\). Consequently, the Poisson summation formula has the form

$$\begin{aligned} \sum \limits _{U\in {\mathbb {Z}}^{m\times n}} f(U)=\sum \limits _{V\in A^{-1}{\mathbb {Z}}^{m\times n}} {\widehat{f}}(V). \end{aligned}$$
(4.1)
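Formula (4.1) can be verified numerically in the simplest case \(m=n=1\) with \(A=(2)\) (a sketch; the test function and the parameter t are chosen for illustration). For \(f(u)=\exp (-\pi t u^2)\), Definition 4.3 gives \({\widehat{f}}(v)=t^{-1/2}\exp (-\pi (Av)^2/t)\), and the sum over \(v\in \tfrac{1}{2}{\mathbb {Z}}\) matches the sum over \(u\in {\mathbb {Z}}\):

```python
import math

A, t, N = 2, 1.3, 40

# Left-hand side of (4.1): sum of f over the integers.
lhs = sum(math.exp(-math.pi * t * u * u) for u in range(-N, N + 1))

# Right-hand side: sum of the A-twisted Fourier transform over A^{-1} Z,
# i.e. v = k / 2 with k running over the integers.
rhs = sum(math.exp(-math.pi * (A * (k / 2)) ** 2 / t) / math.sqrt(t)
          for k in range(-N, N + 1))
assert abs(lhs - rhs) < 1e-12
```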

In Sect. 4.1, we consider theta series associated with positive definite quadratic forms and give a set of examples of non-holomorphic Siegel modular forms. We obtain those results by generalizing the set-up of Freitag [6]. In Sect. 4.2, we see that a similar construction also yields theta series associated with indefinite quadratic forms that transform like Siegel modular forms.

4.1 Theta series for positive definite quadratic forms

In this section, \(p:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) is a polynomial and \(A\in {{\mathbb {Z}}}^{{m}\times {m}}\) is a symmetric positive definite matrix. Following Freitag [6], we first examine the series

$$\begin{aligned} \sum _{U\in {\mathbb {Z}}^{m\times n}} p(U){\text {e}}\bigl ({\text {tr}}(U^{\mathsf {T}}A UZ)/2\bigr )\quad (Z\in {\mathbb {H}}_n). \end{aligned}$$
(4.2)

We consider the operator

$$\begin{aligned} {\text {tr}}{\Delta }_A =\sum \limits _{\nu =1}^n \sum \limits _{\mu ,\rho =1}^m \frac{\partial }{\partial U_{\mu \nu }}(A^{-1})_{\mu \rho }\frac{\partial }{\partial U_{\rho \nu }} \end{aligned}$$

and define

$$\begin{aligned} \exp (c{\text {tr}}{\Delta }_A ) \bigl (p(U)\bigr ):=\sum \limits _{k=0}^\infty \frac{c^k}{k !}({\text {tr}}{\Delta }_A)^k \bigl (p(U)\bigr )\quad (c\in {\mathbb {C}}). \end{aligned}$$

Since we are assuming that p is a polynomial, this sum is finite.

Lemma 4.4

The following rules hold for \(a,b,c\in {\mathbb {C}}\) and \(M\in {{\mathbb {C}}}^{{m}\times {m}},N\in {{\mathbb {C}}}^{{n}\times {n}}\):

$$\begin{aligned} \exp (a{\text {tr}}{\Delta }_A)\bigl (\exp (b{\text {tr}}{\Delta }_A)(p(U))\bigr )&=\exp \bigl ((a+b){\text {tr}}{\Delta }_A\bigr )\bigl (p(U)\bigr ) \end{aligned}$$
(4.3)
$$\begin{aligned} \exp (c{\text {tr}}{\Delta }_A)\bigl (p(aU)\bigr )&=\bigl (\exp (a^2c{\text {tr}}{\Delta }_A)p\bigr )(aU) \end{aligned}$$
(4.4)
$$\begin{aligned} \exp (c{\text {tr}}{\Delta }_A)\bigl (p(UN)\bigr )&=\bigl (\exp \bigl (c{\text {tr}}(N{\Delta }_AN^{\mathsf {T}})\bigr )p\bigr )(UN) \end{aligned}$$
(4.5)
$$\begin{aligned} \exp (c{\text {tr}}{\Delta }_A)\bigl (p(MU)\bigr )&=\biggl (\exp \Bigl (c{\text {tr}}\Bigl ( \bigl (\frac{\partial }{\partial U}\bigr )^{\mathsf {T}}MA^{-1}M^{\mathsf {T}}\frac{\partial }{\partial U}\Bigr )\Bigr ) p\biggr )(MU) \end{aligned}$$
(4.6)

Proof

We derive Property (4.3) by considering the Cauchy product for the absolutely convergent series \(\sum _{k=0}^\infty \frac{1}{k !}(a{\text {tr}}{\Delta }_A)^k\) and \(\sum _{k=0}^\infty \frac{1}{k !}(b{\text {tr}}{\Delta }_A)^k.\) The identity (4.4) follows immediately from (4.5), when we set \(N:=a\cdot I \in {{\mathbb {C}}}^{{n}\times {n}}\). To show (4.5) we consider p(UN) and apply the Laplacian. We have

$$\begin{aligned} \frac{\partial }{\partial U_{\mu \nu }}\bigl (p(UN)\bigr )=\sum \limits _{\ell =1}^m \sum \limits _{i=1}^n \frac{\partial p}{\partial U_{\ell i}} (UN)\frac{\partial (UN)_{\ell i }}{\partial U_{\mu \nu }}=\sum \limits _{i=1}^n N_{\nu i} \frac{\partial p}{\partial U_{\mu i}} (UN), \end{aligned}$$

since

$$\begin{aligned} \frac{\partial (UN)_{\ell i}}{\partial U_{\mu \nu }}={\left\{ \begin{array}{ll} N_{\nu i}&{}\text { if }\ell =\mu ,\\ 0&{}\text { otherwise}. \end{array}\right. } \end{aligned}$$

With the same argument we obtain

$$\begin{aligned} \frac{\partial ^2}{\partial U_{\mu \nu }\partial U_{\rho \nu }}\bigl (p(UN)\bigr )=\sum \limits _{d,e=1}^n N_{\nu d}N_{\nu e}\frac{\partial ^2p}{\partial U_{\rho d}\partial U_{\mu e}}(UN) \end{aligned}$$

and therefore

$$\begin{aligned} {\text {tr}}{\Delta }_A \bigl (p(UN)\bigr )&=\sum \limits _{\nu =1}^n \sum \limits _{\mu ,\rho =1}^m \frac{\partial }{\partial U_{\mu \nu }}(A^{-1})_{\mu \rho }\frac{\partial }{\partial U_{\rho \nu }}\bigl (p(UN)\bigr )\\&=\sum \limits _{\mu ,\rho =1}^m\sum \limits _{\nu ,d,e=1}^n N_{\nu d}N_{\nu e} (A^{-1})_{\mu \rho } \frac{\partial ^2p}{\partial U_{\rho d}\partial U_{\mu e}}(UN)\\&=\biggl (\sum \limits _{\nu =1}^n\sum \limits _{\mu ,\rho =1}^m \Bigl (N\bigl (\frac{\partial }{\partial U}\bigr )^{\mathsf {T}}\Bigr )_{\nu \mu } (A^{-1})_{\mu \rho }\Bigl (\frac{\partial }{\partial U}N^{\mathsf {T}}\Bigr )_{\rho \nu }p\biggr )(UN)\\&=\bigl ({\text {tr}}(N{\Delta }_AN^{\mathsf {T}})p\bigr )(UN). \end{aligned}$$

Rewriting the Laplacian in the sum then gives (4.5). Analogously we obtain (4.6). \(\square \)
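Property (4.4) can be checked directly in the one-variable case \(m=n=1\), \(A=1\), where \({\text {tr}}{\Delta }_A\) acts as \(d^2/du^2\) on coefficient lists and the exponential series terminates (a sketch with an arbitrarily chosen polynomial and parameters):

```python
def second_derivative(p):
    return [k * (k - 1) * c for k, c in enumerate(p)][2:] or [0.0]

def exp_laplacian(c, p):
    """exp(c d^2/du^2) applied to a polynomial given by coefficients
    [c_0, c_1, ...]; the series terminates since p has finite degree."""
    out, term, k = list(p), list(p), 0
    while any(term):
        k += 1
        term = [c / k * x for x in second_derivative(term)]
        for i, x in enumerate(term):
            out[i] += x
    return out

def rescale(p, a):          # coefficients of u -> p(a u)
    return [coef * a ** k for k, coef in enumerate(p)]

def evaluate(p, u):
    return sum(coef * u ** k for k, coef in enumerate(p))

p = [0.0, 0.0, 2.0, 0.0, 1.0]            # p(u) = u^4 + 2 u^2
a, c, u = 3.0, 0.7, 1.25

# Property (4.4): exp(c Delta)(p(a u)) = (exp(a^2 c Delta) p)(a u).
lhs = evaluate(exp_laplacian(c, rescale(p, a)), u)
rhs = evaluate(rescale(exp_laplacian(a * a * c, p), a), u)
assert abs(lhs - rhs) < 1e-8
```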

We calculate the Fourier transform of the summands in the series (4.2). To shorten the calculation, we apply the following result by Freitag [6, p. 158f.], who considers Gauss transforms: we have

$$\begin{aligned} \int \limits _{{\mathbb {R}}^{m\times n}}p(U+V)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}U)\bigr )dU=\exp \bigl ( {\text {tr}}{\Delta }/4\pi \bigr ) \bigl (p(V)\bigr ). \end{aligned}$$
(4.7)
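For \(m=n=1\) and \(p(u)=u^4\), identity (4.7) can be verified numerically: \(p''=12v^2\) and \(p''''=24\), so \(\exp \bigl ({\text {tr}}{\Delta }/4\pi \bigr )(p(v))=v^4+3v^2/\pi +3/(4\pi ^2)\), and the trapezoidal sum below converges rapidly for the Gaussian integrand (a sketch; grid parameters are chosen for illustration):

```python
import math

def gauss_transform(v, n=4000, lim=8.0):
    """Trapezoidal approximation of int p(u + v) exp(-pi u^2) du for p(u) = u^4."""
    h = 2.0 * lim / n
    return h * sum((-lim + i * h + v) ** 4 * math.exp(-math.pi * (-lim + i * h) ** 2)
                   for i in range(n + 1))

for v in (0.0, 0.5, -1.7):
    rhs = v ** 4 + 3.0 * v ** 2 / math.pi + 3.0 / (4.0 * math.pi ** 2)
    assert abs(gauss_transform(v) - rhs) < 1e-8
```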

Note that Freitag uses the normalized Laplace operator \({\text {tr}}{\Delta }={\text {tr}}{\Delta }_I\). In the next lemma, we see that for arbitrary polynomials p, the functions \(p(U){\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2\bigr )\) are not necessarily eigenfunctions under the Fourier transform:

Lemma 4.5

Let \(f:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {C}},f(U):=p(U){\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2\bigr ).\) The Fourier transform of f is

$$\begin{aligned} {\widehat{f}}(V)= & {} \det A^{-n/2}\det (-iZ)^{-m/2}{\text {e}}\bigl (- {\text {tr}}(V^{\mathsf {T}}A VZ^{-1})/2\bigr )\\&\cdot \Bigl (\exp \bigl (i{\text {tr}}( {\Delta }_A Z^{-1})/4\pi \bigr )p\Bigr )(-VZ^{-1}). \end{aligned}$$

Proof

We rewrite Freitag’s result (4.7) to obtain a form that is suitable for the calculation of the Fourier transform: we replace U by \(U+iV\), and as the integrand is holomorphic in several complex variables, we may shift the contour (i. e. instead of integrating over U one can integrate over \(U+iV\) without changing the integral) and obtain

$$\begin{aligned} \begin{aligned} \int \limits _{{\mathbb {R}}^{m\times n}}p(U)&\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}U)+2\pi i {\text {tr}}(V^{\mathsf {T}}U)\bigr )dU\\&=\exp \bigl ( -\pi {\text {tr}}(V^{\mathsf {T}}V)\bigr )\int \limits _{{\mathbb {R}}^{m\times n}}p(U+iV)\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}U)\bigr )dU\\&=\exp \bigl ( -\pi {\text {tr}}(V^{\mathsf {T}}V)\bigr )\bigl (\exp ({\text {tr}}{\Delta }/4\pi )p\bigr ) (iV). \end{aligned} \end{aligned}$$
(4.8)

To determine

$$\begin{aligned} {\widehat{f}}(V)=\int \limits _{{\mathbb {R}}^{m\times n}}p(U){\text {e}}\bigl ({\text {tr}}(U^{\mathsf {T}}AUZ)/2+ {\text {tr}}(V^{\mathsf {T}}A U)\bigr )dU, \end{aligned}$$

we set \(Z=iY\) and substitute U by \(A^{-1/2}UY^{-1/2}\) (A and Y are symmetric positive definite matrices, so their inverses and uniquely determined positive square roots are as well):

$$\begin{aligned} \int \limits _{{\mathbb {R}}^{m\times n}}p(U)&\exp \bigl (-\pi {\text {tr}}(U^{\mathsf {T}}AUY)+2\pi i{\text {tr}}(V^{\mathsf {T}}AU)\bigr )dU\\&=\det A^{-n/2}\det Y^{-m/2} \int \limits _{{\mathbb {R}}^{m\times n}}p(A^{-1/2}UY^{-1/2}) \\&\quad \cdot \exp \Bigl (-\pi {\text {tr}}(U^{\mathsf {T}}U)+2\pi i{\text {tr}}\bigl (( A^{1/2}VY^{-1/2})^{\mathsf {T}}U\bigr )\Bigr )dU \end{aligned}$$

This is (4.8) evaluated at \(A^{1/2}VY^{-1/2}\) with a slightly changed argument in the polynomial p. We apply (4.5) and (4.6) and use that Y is symmetric to write \({\widehat{f}}\) as

$$\begin{aligned} \det A^{-n/2}\det Y^{-m/2} \exp \bigl (-\pi {\text {tr}}(V^{\mathsf {T}}A VY^{-1})\bigr ) \Bigl (\exp \bigl ({\text {tr}}({\Delta }_AY^{-1})/4\pi \bigr )p\Bigr )(iVY^{-1}). \end{aligned}$$

As both sides are holomorphic in Z, we resubstitute \(Y=-iZ\) (for the inverse we have \(Y^{-1}=iZ^{-1}\)) and deduce the claim by analytic continuation. \(\square \)
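Lemma 4.5 can be tested numerically for \(m=n=1\), \(A=(2)\), \(p(u)=u^2\) at a purely imaginary point \(Z=iy\) (a sketch; the test values are chosen for illustration). Here \({\Delta }_A=\tfrac{1}{2}\,d^2/du^2\), so the lemma predicts \({\widehat{f}}(v)=(2y)^{-1/2}\exp (-2\pi v^2/y)\bigl (1/(4\pi y)-v^2/y^2\bigr )\):

```python
import math, cmath

def fourier_transform(v, y, n=8000, lim=6.0):
    """Trapezoidal approximation of int u^2 e(u^2 A (i y) / 2 + v A u) du
    for A = 2, i.e. int u^2 exp(-2 pi y u^2 + 4 pi i v u) du."""
    h = 2.0 * lim / n
    total = 0.0 + 0.0j
    for i in range(n + 1):
        u = -lim + i * h
        total += u * u * cmath.exp(-2.0 * math.pi * y * u * u
                                   + 4.0j * math.pi * v * u)
    return total * h

y = 0.7
for v in (0.0, 0.4, -1.1):
    predicted = (math.exp(-2.0 * math.pi * v * v / y) / math.sqrt(2.0 * y)
                 * (1.0 / (4.0 * math.pi * y) - v * v / (y * y)))
    assert abs(fourier_transform(v, y) - predicted) < 1e-8
```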

In order to obtain an eigenfunction under the Fourier transformation, Freitag [6] chooses p to be a harmonic polynomial, i. e. \(({\text {tr}}{\Delta }) p=0\) and \(p(UN)=\det N^{\alpha } p(U)\) holds for all \(N\in {{\mathbb {C}}}^{{n}\times {n}}\). We consider the more general class of polynomials

$$\begin{aligned} p_Z(U):=\exp \bigl (-{\text {tr}}({\Delta }_AY^{-1})/8\pi \bigr )\bigl (P(U)\bigr )\quad \text {with}\quad P\in {\mathcal {P}}_{\alpha }^{m,n}. \end{aligned}$$
(4.9)

We described the vector space \({\mathcal {P}}_{\alpha }^{m,n}\) in Sect. 3.1, where we have also seen that the functions \(p(U):=\exp (-{\text {tr}}({\Delta }_A)/8\pi )\bigl (P(U)\bigr )\) with \(P\in {\mathcal {P}}_{\alpha }^{m,n}\) form a basis for the vector space of the solutions of \({\mathcal {D}}_A f=\alpha \cdot I \cdot f\). The slightly modified functions in (4.9) depend on the imaginary part Y of Z, which means that we lose holomorphicity in the construction of the theta series. However, for harmonic polynomials P, we obtain the holomorphic theta series considered by Freitag. Note that this is essentially a generalization of Borcherds’ construction for \(n=1\) in [4]; see Remark 2.4 for a more detailed explanation.

Lemma 4.6

Let \(p_Z\) denote a polynomial from (4.9) and define

$$\begin{aligned} f_Z(U):=p_Z(U){\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2\bigr ). \end{aligned}$$

The Fourier transform of \(f_Z\) is

$$\begin{aligned} \widehat{f_Z}(V)=i^{-mn/2}\det A^{-n/2} \det (-Z^{-1})^ {m/2+\alpha }f_{-Z^{-1}}(V). \end{aligned}$$

Proof

We apply Lemma 4.5 and then use the linearity of the trace and Property (4.3):

$$\begin{aligned} \begin{aligned} \int \limits _{{\mathbb {R}}^{m\times n}}p_Z(U)&{\text {e}}\bigl ({\text {tr}}(U^{\mathsf {T}}AUZ)/2+ {\text {tr}}(V^{\mathsf {T}}A U)\bigr )dU\\&= \det A^{-n/2}\det (-iZ)^{-m/2} {\text {e}}\bigl (- {\text {tr}}(V^{\mathsf {T}}A VZ^{-1})/2\bigr )\\&\quad \cdot \left( \exp \bigl ( i{\text {tr}}\bigl ({\Delta }_AZ^{-1}\bigr )/4\pi -{\text {tr}}( {\Delta }_AY^{-1})/8\pi \bigr )P\right) (-VZ^{-1})\\&= \det A^{-n/2}\det (-iZ)^{-m/2} {\text {e}}\bigl (- {\text {tr}}(V^{\mathsf {T}}A VZ^{-1})/2\bigr )\\&\quad \cdot \left( \exp \bigl (-{\text {tr}}( {\Delta }_A(Y^{-1}-2iZ^{-1}))/8\pi \bigr )P\right) (-VZ^{-1}) \end{aligned} \end{aligned}$$
(4.10)

If \({\widetilde{Y}}\) denotes the imaginary part of \(-Z^{-1}\), the identity \({\widetilde{Y}}={\overline{Z}}^{-1} Y Z^{-1}\) holds by (2.1). Hence,

$$\begin{aligned} Y^{-1}-2iZ^{-1}=Y^{-1}(Z-2iY)Z^{-1}=Y^{-1}{\overline{Z}} Z^{-1}=Z^{-1}(ZY^{-1}{\overline{Z}})Z^{-1}=Z^{-1}{\widetilde{Y}}^{-1} Z^{-1}. \end{aligned}$$

The matrix Z is symmetric and therefore also its inverse \(Z^{-1}\), which means that we can rewrite (4.10) as follows:

$$\begin{aligned}&\det A^{-n/2}\det (-iZ)^{-m/2} {\text {e}}\bigl (- {\text {tr}}(V^{\mathsf {T}}A VZ^{-1})/2\bigr )\\&\quad \cdot \left( \exp \Bigl ( -{\text {tr}}\bigl ((-Z^{-1}){\Delta }_A(-Z^{-1})^{\mathsf {T}}{\widetilde{Y}}^{-1}\bigr )/8\pi \Bigr )P\right) (-VZ^{-1}) \end{aligned}$$

Using Property (4.5) and the homogeneity of \(P\in {\mathcal {P}}_{\alpha }^{m,n}\), we conclude that the Fourier transform of \(f_Z\) has the form

$$\begin{aligned} \widehat{f_Z}(V)&= \det A^{-n/2}\det (-iZ)^{-m/2}\det (-Z^{-1})^{\alpha }{\text {e}}\bigl (- {\text {tr}}(V^{\mathsf {T}}A VZ^{-1})/2\bigr )\\&\quad \cdot \Bigl (\exp \bigl (-{\text {tr}}({\Delta }_A{\widetilde{Y}}^{-1})/8\pi \bigr )P\Bigr )(V) \\&= \det A^{-n/2}\det (-iZ)^{-m/2}\det (-Z^{-1})^{\alpha }f_{-Z^{-1}}(V). \end{aligned}$$

Writing \(\det (-iZ)^{-m/2}\) as \(i^{-mn/2}\det (-Z^{-1})^{m/2}\) and collecting the determinant factors, we deduce the claim. \(\square \)

This construction yields theta series that transform like Siegel modular forms:

Proposition 4.7

Let \(A\in {\mathbb {Z}}^{m\times m}\) denote a positive definite symmetric matrix and p the polynomial defined as \(p(U)=\exp \bigl (-{\text {tr}}{\Delta }_A/8\pi \bigr )\bigl (P(U)\bigr )\) with \(P\in {\mathcal {P}}_{\alpha }^{m,n}\). For the corresponding theta series \(\vartheta _{{H},{K}}\) given in Definition 1.3 we have

$$\begin{aligned} \vartheta _{{H},{K}}(-Z^{-1})= & {} i^{-mn/2}\det A^{-n/2}\det Z^{m/2+\alpha }{\text {e}}\bigl ( {\text {tr}}(H^{\mathsf {T}}A K)\bigr )\\&\cdot \sum \limits _{J\in A^{-1}{\mathbb {Z}}^{m\times n} {\text {mod}} {\mathbb {Z}}^{m\times n} } \vartheta _{{J+K},{-H}}(Z). \end{aligned}$$

Proof

We recall the definition of \(\vartheta _{{H},{K}}\), which is

$$\begin{aligned} \vartheta _{{H},{K}}(Z)=\det Y^{-\alpha /2}\sum \limits _{U\in H+{\mathbb {Z}}^{m\times n}}p(UY^{1/2}){\text {e}}\bigl ( {\text {tr}}(U^{\mathsf {T}}AUZ)/2+ {\text {tr}}(K^{\mathsf {T}}AU)\bigr ). \end{aligned}$$

We use Property (4.5) and the homogeneity property of P to rewrite

$$\begin{aligned} \det Y^{-\alpha /2}p(UY^{1/2})=\exp \bigl (-{\text {tr}}({\Delta }_A Y^{-1})/8\pi \bigr )\bigl (P(U)\bigr )=p_Z(U), \end{aligned}$$

and analogously \(p_{-Z^{-1}}(U)=\det {\widetilde{Y}}^{-\alpha /2}p(U{\widetilde{Y}}^{1/2})\). That means the theta series has the form

$$\begin{aligned} \vartheta _{{H},{K}}(-Z^{-1})= & {} \sum \limits _{U\in {\mathbb {Z}}^{m\times n}}\Big \lbrace p_{-Z^{-1}}(U+H)\\&\cdot {\text {e}}\bigl (- {\text {tr}}((U+H)^{\mathsf {T}}A(U+H)Z^{-1})/2+ {\text {tr}}(K^{\mathsf {T}}A(U+H))\bigr )\Big \rbrace . \end{aligned}$$

By Lemma 4.6, the Fourier transform of the summand equals

$$\begin{aligned}&i^{-mn/2}\det A^{-n/2}\det Z^{m/2+\alpha }p_Z(V+K)\\&\quad \cdot {\text {e}}\bigl ({\text {tr}}((V+K)^{\mathsf {T}}A(V+K)Z)/2- {\text {tr}}(H^{\mathsf {T}}A V)\bigr ). \end{aligned}$$

The summands in the theta series are Schwartz functions since A is positive definite. Hence, we apply the Poisson summation formula (4.1) and obtain

$$\begin{aligned} \vartheta _{{H},{K}}(-Z^{-1})&= i^{-mn/2}\det A^{-n/2}\det Z^{m/2+\alpha }{\text {e}}\bigl ( {\text {tr}}(H^{\mathsf {T}}AK)\bigr )\\&\quad \cdot \sum \limits _{V\in K+A^{-1}{\mathbb {Z}}^{m\times n}}p_Z(V){\text {e}}\bigl ({\text {tr}}(V^{\mathsf {T}}AVZ)/2- {\text {tr}}(H^{\mathsf {T}}AV)\bigr ) \\&= i^{-mn/2}\det A^{-n/2}\det Z^{m/2+\alpha }{\text {e}}\bigl ( {\text {tr}}(H^{\mathsf {T}}AK)\bigr )\\&\quad \cdot \sum \limits _{J\in A^{-1}{\mathbb {Z}}^{m\times n} {\text {mod}} {\mathbb {Z}}^{m\times n} } \vartheta _{{J+K},{-H}}(Z), \end{aligned}$$

which completes the proof. \(\square \)

Example 4.8

For \(m\equiv 0\ ({\text {mod}} 8)\), we choose an even unimodular matrix \(A\in {\mathbb {Z}}^{m\times m}\), which means in particular that \(\det A=1\) and \(A^{-1}\in {\mathbb {Z}}^{m\times m}\). Considering the theta series

$$\begin{aligned} \vartheta _{{\mathrm {O}},{\mathrm {O}}}(Z)=\det Y^{-\alpha /2}\sum \limits _{U\in {\mathbb {Z}}^{m\times n}}p(UY^{1/2}){\text {e}}\bigl ({\text {tr}}(U^{\mathsf {T}}AUZ)/2\bigr ), \end{aligned}$$

we have \(\vartheta _{{\mathrm {O}},{\mathrm {O}}}(Z+S)=\vartheta _{{\mathrm {O}},{\mathrm {O}}}(Z)\) for any symmetric matrix \(S \in {{\mathbb {Z}}}^{{n}\times {n}}\) by Lemma 4.2 and \(\vartheta _{{\mathrm {O}},{\mathrm {O}}}(-Z^{-1})=\det Z^{m/2+\alpha }\vartheta _{{\mathrm {O}},{\mathrm {O}}}(Z)\) for a polynomial p as chosen in Proposition 4.7. Thus, \(\vartheta _{{\mathrm {O}},{\mathrm {O}}}\) is a non-holomorphic Siegel modular form of weight \(m/2+\alpha \) on the full Siegel modular group \(\varGamma _n\).
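For \(m=8\), the classical choice of an even unimodular matrix is a Gram matrix of the \(E_8\) lattice. As a numerical sanity check (a sketch using the standard coordinate model of \(E_8\): integer vectors with even coordinate sum together with vectors in \(({\mathbb {Z}}+\tfrac{1}{2})^8\) with even coordinate sum), the number of lattice vectors of norm 2 is 240, the first nontrivial Fourier coefficient of the corresponding holomorphic theta series of weight 4:

```python
from itertools import product

count = 0
# Integer vectors of norm 2 have entries in {-1, 0, 1}; the coordinate
# sum of such a vector is automatically even.
for x in product((-1, 0, 1), repeat=8):
    if sum(v * v for v in x) == 2 and sum(x) % 2 == 0:
        count += 1
# Half-integer vectors of norm 2 have all entries +-1/2 (8 * 1/4 = 2);
# the integer coordinate sum must again be even.
for signs in product((-1, 1), repeat=8):
    if sum(signs) // 2 % 2 == 0:
        count += 1
assert count == 240
```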

4.2 Theta series for indefinite quadratic forms

In this section, we consider theta series associated with non-degenerate symmetric matrices \(A\in {{\mathbb {Z}}}^{{m}\times {m}}\) with signature (rs), where \(s\ge 0\). As described in Remark 2.3, we decompose \(A=A^{+}+A^{-}\) by employing the matrix of normalized eigenvectors \(S\in {\mathbb {R}}^{m\times m}\) so that we obtain the associated majorant matrix \(M=A^{+}-A^{-}\) and the projections \(U^{\pm }\) of U into the positive and negative subspaces of \({\mathbb {R}}^{m\times n}\) respectively. We replace the polynomials p that were defined in Proposition 4.7 by functions of the form

$$\begin{aligned} g(U):=\exp \bigl ( -{\text {tr}}{\Delta }_{M}/8\pi \bigr )\bigl (P(U)\bigr )\exp \bigl (2\pi {\text {tr}}(U^{\mathsf {T}}A^{-}U )\bigr ), \end{aligned}$$
(4.11)

where \(P\in {\mathcal {P}}_{\alpha +\beta }^{m,n}\) is given as the product \(P(U)=P_{\alpha }(U^+)\cdot P_{\beta }(U^-)\) with \(P_{\alpha }\in {\mathcal {P}}_{\alpha }^{m,n}\) and \(P_{\beta }\in {\mathcal {P}}_{\beta }^{m,n}\). For \(\alpha -\beta =\lambda +s\), we know from Sect. 3.2 that these functions are solutions of \({\mathcal {D}}_A f=\lambda \cdot I\cdot f\). Of course, we could also replace g by a linear combination of functions of this type with \((\alpha ,\beta )\in {\mathbb {N}}_0^2\) such that \(\alpha -\beta =\lambda +s\) to construct modular Siegel theta series. However, it suffices for the proof of Theorem 1.5, and simplifies the following calculations, to consider g as defined above, since these functions in particular include the basis elements of the vector space of solutions of \({\mathcal {D}}_A f=\lambda \cdot I\cdot f\).

In analogy with the last section, we define

$$\begin{aligned} g_Z(U):=\exp \bigl ( -{\text {tr}}({\Delta }_{M} Y^{-1})/8\pi \bigr )\bigl (P(U)\bigr )\exp \bigl (2\pi {\text {tr}}(U^{\mathsf {T}}A^{-}U Y )\bigr ) \end{aligned}$$

and

$$\begin{aligned} f_Z(U):=g_Z(U){\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2\bigr ). \end{aligned}$$

For \(s=0\), we get back the functions from Lemma 4.6, so we use the same notation.

Lemma 4.9

The Fourier transform of \(f_Z(U)=g_Z(U){\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2\bigr )\) is

$$\begin{aligned} \widehat{f_Z}(V)=i^{-mn/2} (-1)^{\beta s} |\det A|^{-n/2} \det (-Z^{-1})^{r/2+\alpha }&\det {\overline{Z}}^{-(s/2+\beta )}f_{-Z^{-1}}( V). \end{aligned}$$

Proof

We change the basis of \({\mathbb {R}}^m\) by the substitution of \(U\mapsto SU\) to obtain a part that depends on the first r rows of U (again, we denote this part of the matrix by \(U_r\)) and one part that depends on the last s rows of U (analogously, we denote this part by \(U_s\)):

$$\begin{aligned} \begin{aligned} \int \limits _{{\mathbb {R}}^{m\times n}}g_Z(U)&{\text {e}}\bigl ( {\text {tr}}( U^{\mathsf {T}}A UZ)/2+{\text {tr}}(U^{\mathsf {T}}AV)\bigr )dU\\&=\int \limits _{{\mathbb {R}}^{m\times n}}\exp \bigl ( -{\text {tr}}({\Delta }_{M} Y^{-1})/8\pi \bigr )\bigl (P(U)\bigr )\\&\quad \cdot \exp \bigl (2\pi {\text {tr}}(U^{\mathsf {T}}A^{-}U Y )+\pi i{\text {tr}}(U^{\mathsf {T}}AUZ)+2\pi i{\text {tr}}(U^{\mathsf {T}}AV)\bigr )dU \\&=\det S^n\int \limits _{{\mathbb {R}}^{m\times n}}\exp \bigl ( -{\text {tr}}({\Delta }Y^{-1})/8\pi \bigr )\bigl (P(SU)\bigr )\\&\quad \cdot \exp \bigl (-2\pi {\text {tr}}(U_s^{\mathsf {T}}U_sY )+\pi i{\text {tr}}(U^{\mathsf {T}}{\mathcal {I}}UZ)+2\pi i{\text {tr}}(U^{\mathsf {T}}{\mathcal {I}}S^{-1} V)\bigr )dU \end{aligned} \end{aligned}$$

We can split up the integral, as the polynomial P factors as a product of polynomials depending on \(U_r\) and \(U_s\), respectively. We now apply the results for positive definite quadratic forms. By Lemma 4.6, we obtain

$$\begin{aligned} \begin{aligned} \int \limits _{{\mathbb {R}}^{r\times n}}\exp \biggl (-\frac{1}{8\pi }&{\text {tr}}\Bigl (\Bigl ( \frac{\partial }{\partial U_r}\Bigr )^{\mathsf {T}}\frac{\partial }{\partial U_r}Y^{-1}\Bigr )\biggr )\bigl (P_{\alpha }(U_r)\bigr )\\&\cdot {\text {e}}\bigl ({\text {tr}}(U_r^{\mathsf {T}}U_rZ)/2+{\text {tr}}(V_r^{\mathsf {T}}U_r)\bigr )dU_r\\&= i^{-rn/2} \det (-Z^{-1})^{r/2+\alpha }{\text {e}}\bigl (-{\text {tr}}(V_r^{\mathsf {T}}V_rZ^{-1})/2\bigr )\\&\cdot \exp \biggl (-\frac{1}{8\pi }{\text {tr}}\Bigl (\Bigl (\frac{\partial }{\partial V_r}\Bigr )^{\mathsf {T}}\frac{\partial }{\partial V_r}{\widetilde{Y}}^{-1}\Bigr )\biggr )\bigl (P_{\alpha }(V_r)\bigr ). \end{aligned} \end{aligned}$$
(4.12)

We treat the part that depends on the negative definite subspace as an expression associated with the positive definite quadratic form given by \(I_s\) and consider \(-{\overline{Z}}\in {\mathbb {H}}_n\) as a variable in the Siegel upper half-space. Also note that by (2.1) we have

$$\begin{aligned} {\overline{Z}}^{-1}={\overline{Z}}^{-1}ZZ^{-1}={\overline{Z}}^{-1}({\overline{Z}}+2iY)Z^{-1}=Z^{-1}+2i{\overline{Z}}^{-1}YZ^{-1}=Z^{-1}+2i{\widetilde{Y}}. \end{aligned}$$
(4.13)

In particular, \({\text {Im}}\bigl ({\overline{Z}}^{-1}\bigr )={\text {Im}}(-Z^{-1})={\widetilde{Y}}\) and thus we have

$$\begin{aligned} \begin{aligned} \int \limits _{{\mathbb {R}}^{s\times n}}\exp \biggl (-\frac{1}{8\pi }&{\text {tr}}\Bigl (\Bigl (\frac{\partial }{\partial U_s}\Bigr )^{\mathsf {T}}\frac{\partial }{\partial U_s}Y^{-1}\Bigr )\biggr )\bigl (P_{\beta }(U_s)\bigr )\\&\cdot {\text {e}}\bigl (- {\text {tr}}(U_s^{\mathsf {T}}U_s{\overline{Z}})/2-{\text {tr}}(V_s^{\mathsf {T}}U_s)\bigr )dU_s\\&=i^{-sn/2}(-1)^{\beta s}\det ({\overline{Z}}^{-1})^{s/2+\beta }{\text {e}}\bigl ( {\text {tr}}(V_s^{\mathsf {T}}V_s{\overline{Z}}^{-1})/2\bigr )\\&\cdot \exp \biggl (-\frac{1}{8\pi }{\text {tr}}\Bigl (\Bigl (\frac{\partial }{\partial V_s}\Bigr )^{\mathsf {T}}\frac{\partial }{\partial V_s}{\widetilde{Y}}^{-1}\Bigr )\biggr )\bigl (P_{\beta }(V_s)\bigr ), \end{aligned} \end{aligned}$$
(4.14)

where we evaluate the Fourier transform at \(-V_s\) and use Property (4.5) and the identity \(P_{\beta }(-V_s)=(-1)^{\beta s} P_{\beta }(V_s)\) to rewrite the expression. We now consider the product of (4.12) and (4.14) (making use of (4.13) again to rewrite the exponential factor) and obtain:

$$\begin{aligned} \int \limits _{{\mathbb {R}}^{m\times n}}&\exp \bigl ( -{\text {tr}}({\Delta }Y^{-1})/8\pi \bigr )(P(SU))\\&\quad \cdot \exp \bigl (-2\pi {\text {tr}}(U_s^{\mathsf {T}}U_sY )+\pi i{\text {tr}}(U^{\mathsf {T}}{\mathcal {I}}UZ)+2\pi i{\text {tr}}(U^{\mathsf {T}}{\mathcal {I}}V)\bigr )dU\\&=i^{-mn/2} (-1)^{\beta s} \det (-Z^{-1})^{r/2+\alpha }\det {\overline{Z}}^{-(s/2+\beta )}\exp \bigl ( -({\text {tr}}{\Delta }{\widetilde{Y}}^{-1})/8\pi \bigr )(P(SV))\\&\quad \cdot \exp \bigl (-\pi i {\text {tr}}(V^{\mathsf {T}}{\mathcal {I}}V Z^{-1})-2\pi {\text {tr}}(V_s^{\mathsf {T}}V_s{\widetilde{Y}})\bigr ) \end{aligned}$$

We evaluate this integral at \(S^{-1}V\) to complete the proof. Without loss of generality, we can assume that \(\det S>0\) and therefore write \(\det S^n\) as \(|\det A|^{-n/2}\). \(\square \)

Thus, we can state a more general version of Proposition 4.7 for Siegel theta series for indefinite quadratic forms.

Proposition 4.10

Let \(\lambda =\alpha -\beta -s\) and let \(g:{\mathbb {R}}^{m\times n}\longrightarrow {\mathbb {R}}\) denote a function of the form (4.11). The theta series of the form

$$\begin{aligned} \vartheta _{{H},{K}}(Z)=\det Y^{-\lambda /2}\sum \limits _{U\in H+{\mathbb {Z}}^{m\times n}} g(UY^{1/2}){\text {e}}\bigl ( {\text {tr}}(U^{\mathsf {T}}A UZ)/2+ {\text {tr}}(K^{\mathsf {T}}AU)\bigr ) \end{aligned}$$

transforms as follows:

$$\begin{aligned} \vartheta _{{H},{K}}(-Z^{-1})= & {} i^{-mn/2}(-1)^{(s/2+\beta )n+\beta s}|\det A|^{-n/2}\det Z^{(r-s)/2+\alpha -\beta }\\&\cdot {\text {e}}\bigl ( {\text {tr}}(H^{\mathsf {T}}A K)\bigr )\cdot \sum \limits _{J\in A^{-1}{\mathbb {Z}}^{m\times n} {\text {mod}} {\mathbb {Z}}^{m\times n} } \vartheta _{{J+K},{-H}}(Z) \end{aligned}$$

Proof

We use the same approach as in the proof of Proposition 4.7. By Property (4.5), we have \(g_{-Z^{-1}}(U)=\det {\widetilde{Y}}^{-(\alpha +\beta )/2}g(U{\widetilde{Y}}^{1/2})\), and thus we rewrite the theta series as

$$\begin{aligned} \begin{aligned} \vartheta _{{H},{K}}(-Z^{-1})&=\det {\widetilde{Y}}^{s/2+\beta }\sum \limits _{U\in {\mathbb {Z}}^{m\times n}}\Big \lbrace g_{-Z^{-1}}(U+H)\\&\quad \cdot {\text {e}}\bigl (- {\text {tr}}( (U+H)^{\mathsf {T}}A (U+H)Z^{-1})/2+ {\text {tr}}(K^{\mathsf {T}}A(U+H))\bigr )\Big \rbrace . \end{aligned} \end{aligned}$$

By Lemma 4.9, the Fourier transform of the summand equals

$$\begin{aligned}&i^{-mn/2}(-1)^{\beta s}|\det A|^{-n/2}\det Z^{r/2+\alpha }\det (-{\overline{Z}})^{s/2+\beta }g_Z(V+K)\\&\quad \cdot {\text {e}}\bigl ({\text {tr}}((V+K)^{\mathsf {T}}A(V+K)Z)/2- {\text {tr}}(H^{\mathsf {T}}A V)\bigr ). \end{aligned}$$

By (4.13), we have \({\widetilde{Y}}={\overline{Z}}^{-1}YZ^{-1}\) and thus rewrite

$$\begin{aligned} \det {\widetilde{Y}}^{s/2+\beta }\det Z^{r/2+\alpha }\det (-{\overline{Z}})^{s/2+\beta }= (-1)^{(s/2+\beta )n} \det Z^{(r-s)/2+\alpha -\beta }\det Y^{s/2+\beta }. \end{aligned}$$

As \(\det Y^{s/2+\beta } g_Z(U)=\det Y^{-\lambda /2}g(UY^{1/2})\) by Property (4.5), we have

$$\begin{aligned} \vartheta _{{H},{K}}&(-Z^{-1})\\ =&i^{-mn/2}(-1)^{(s/2+\beta )n+\beta s} |\det A|^{-n/2} \det Z^{(r-s)/2+\alpha -\beta }{\text {e}}\bigl ({\text {tr}}(H^{\mathsf {T}}AK)\bigr )\\&\quad \cdot \det Y^{-\lambda /2}\sum \limits _{V\in K+A^{-1}{\mathbb {Z}}^{m\times n}} g(VY^{1/2}){\text {e}}\bigl ({\text {tr}}(V^{\mathsf {T}}AVZ)/2-{\text {tr}}(V^{\mathsf {T}}AH)\bigr ). \end{aligned}$$

Again, we write

$$\begin{aligned}&\det Y^{-\lambda /2}\sum \limits _{V\in K+A^{-1}{\mathbb {Z}}^{m\times n}} g(VY^{1/2}){\text {e}}\bigl ( {\text {tr}}(V^{\mathsf {T}}AVZ)/2-{\text {tr}}(V^{\mathsf {T}}AH)\bigr )\\&\quad =\sum \limits _{J\in A^{-1}{\mathbb {Z}}^{m\times n} {\text {mod}} {\mathbb {Z}}^{m\times n} } \vartheta _{{J+K},{-H}}(Z), \end{aligned}$$

which completes the proof. \(\square \)

Example 4.11

We obtain examples of non-holomorphic Siegel modular forms on the full Siegel modular group if \(H=K=\mathrm {O}\) and A is an even unimodular matrix and additionally \(i^{mn/2}(-1)^{(s/2+\beta )n+\beta s}=1\) holds. Note that an even symmetric unimodular matrix of indefinite signature (rs) only exists when \(r-s\equiv 0\ ({\text {mod}} 8)\) and is isomorphic to \(H_2^k\oplus (\pm E_8)^{\ell }\) with \(k=\min \{r,s\}\) and \(\ell =|r-s|/8\), where \(H_2=\left( \begin{array}{ll} 0&{}1\\ 1&{}0 \end{array}\right) \) and \(E_8\) represents the equivalence class of all even unimodular positive definite matrices of rank 8 (note that we take \(E_8\) if \(r>s\) and \(-E_8\) if \(r<s\)), see for example Husemoller and Milnor [8, p. 24-26] for more details.

4.3 Proof of Theorem 1.5

In Sect. 3, we introduced the \(n\times n\) system of partial differential equations \({\mathcal {D}}_A f=\lambda \cdot I\cdot f\) and determined a basis for all the solutions f that additionally satisfy the growth condition \(f(U)\exp (-\pi {\text {tr}}(U^{\mathsf {T}}AU))\in {\mathcal {S}}({\mathbb {R}}^{m\times n})\) (see Proposition 3.12). In this section, we have determined the modular transformation behavior of the associated Siegel theta series \(\vartheta _{H,K,f,A}\) by explicitly calculating the transformation formulas for the generators \(Z\mapsto Z+S\) (see Lemma 4.2) and \(Z\mapsto -Z^{-1}\) (see Proposition 4.10) of the Siegel modular group. For an even matrix A and \(\lambda =\alpha -\beta -s\), the theta series \(\vartheta _{\mathrm {O},\mathrm {O},f,A}\) transforms like a Siegel modular form of genus n and weight \(m/2+\lambda \) on some congruence subgroup of \(\varGamma _n\). This proves Theorem 1.5.