1 Introduction

A generalized n-dimensional polytope in \(\mathbb {R}^n\) is a union of a finite number of convex n-dimensional polytopes in \(\mathbb {R}^n\). In this paper we study the Fourier–Laplace transform of n-dimensional generalized polytopes \(\mathcal {P}\) in \(\mathbb {R}^n\)

$$\begin{aligned} F_{\mathcal {P}}(\textbf{z})=\int _{\mathcal {P}} e^{\textbf{z}\cdot \textbf{x}} \,\textbf{dx}, \end{aligned}$$

restricted to nonzero complex multiples of rationally parameterizable hypersurfaces. Here bold symbols denote real or complex tuples and \(\textbf{z}\cdot \textbf{x}=\sum _j z_jx_j\). In the special case \(\textbf{z}=-i\textbf{s}\) for \(\textbf{s}\in \mathbb {R}^n\) we obtain the Fourier transform. A rationally parameterizable hypersurface (briefly rp-hypersurface) is a set \(\mathcal {S}\) of points in \(\mathbb {R}^n\) of the form

$$\begin{aligned} \mathcal {S}=\{\varvec{\sigma }(\textbf{t}): \textbf{t} \in D\}, \end{aligned}$$
(1)

where

$$\begin{aligned} \varvec{\sigma }(\textbf{t})=\begin{pmatrix}\sigma _1(t_1,\ldots ,t_{n-1})\\ \vdots \\ \sigma _n(t_1,\ldots ,t_{n-1})\end{pmatrix}, \end{aligned}$$

the functions \(\sigma _j\), \(j=1,\ldots ,n\), are rational functions and \(D \subseteq \mathbb {R}^{n-1}\) is the common domain of these rational functions. In the case \(n=2\) we use the term rationally parameterizable curve (briefly rp-curve) instead of rp-hypersurface and write \(\mathcal {C}\) instead of \(\mathcal {S}\).

For example, using polar and spherical coordinates, respectively, together with the standard substitution \(t=\tan (\alpha /2)\), which implies \(\cos \alpha =(1-t^2)/(1+t^2)\) and \(\sin \alpha =2t/(1+t^2)\), one obtains that the unit circle

$$\begin{aligned} \mathcal {C}_1=\biggl \{\biggl (\frac{1-t^2}{1+t^2}, \frac{2t}{1+t^2}\biggr ): t \in \mathbb {R}\biggr \} \end{aligned}$$

(with the missing point \((-1,0)\)) is an rp-curve and that the unit sphere in \(\mathbb {R}^3\)

$$\begin{aligned} \mathcal {S}=\biggl \{\biggl (\frac{1-t_1^2}{1+t_1^2}\cdot \frac{1-t_2^2}{1+t_2^2}, \frac{2t_1}{1+t_1^2}\cdot \frac{1-t_2^2}{1+t_2^2}, \frac{2t_2}{1+t_2^2}\biggr ): t_1 \in \mathbb {R},\, t_2 \in [-1,1]\biggr \} \end{aligned}$$

(with the missing segment \((-\sqrt{1-\lambda ^2},0,\lambda )\), \(\lambda \in (-1,1)\)) is an rp-hypersurface. Clearly, also the hyperbola

$$\begin{aligned} \mathcal {C}_2=\biggl \{\biggl (t, \frac{1}{t}\biggr ): t \in \mathbb {R}\setminus \{0\}\biggr \} \end{aligned}$$

and the parabola

$$\begin{aligned} \mathcal {C}_3=\{(t, t^2): t \in \mathbb {R}\} \end{aligned}$$

are rp-curves. Since affine transformations preserve rationality, this is also true for any circle, hyperbola, parabola and sphere.
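The identities behind these parameterizations are elementary and can be checked mechanically. The following sketch (not part of the paper; plain Python, standard library only) verifies at random parameter values that the circle and sphere parameterizations satisfy the defining equations and that the circle parameterization agrees with \((\cos \alpha ,\sin \alpha )\) under \(t=\tan (\alpha /2)\).

```python
import math
import random

def circle(t):
    # rational parameterization of the unit circle (t = tan(alpha/2))
    d = 1 + t * t
    return ((1 - t * t) / d, 2 * t / d)

def sphere(t1, t2):
    # rational parameterization of the unit sphere in R^3
    d1, d2 = 1 + t1 * t1, 1 + t2 * t2
    return ((1 - t1 * t1) / d1 * (1 - t2 * t2) / d2,
            2 * t1 / d1 * (1 - t2 * t2) / d2,
            2 * t2 / d2)

random.seed(0)
for _ in range(100):
    t = random.uniform(-10, 10)
    x, y = circle(t)
    assert abs(x * x + y * y - 1) < 1e-12      # point lies on the unit circle
    alpha = 2 * math.atan(t)                    # inverse of t = tan(alpha/2)
    assert abs(x - math.cos(alpha)) < 1e-12
    assert abs(y - math.sin(alpha)) < 1e-12
    t1, t2 = random.uniform(-10, 10), random.uniform(-1, 1)
    assert abs(sum(c * c for c in sphere(t1, t2)) - 1) < 1e-12
print("all parameterization checks passed")
```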

With the function \(\varvec{\sigma }:\mathbb {R}^{n-1} \rightarrow \mathbb {R}^n\) we associate its normalized function, i.e., the function \(\hat{\varvec{\sigma }}:\mathbb {R}^{n-1} \rightarrow \mathbb {R}^{n-1}\) defined by

$$\begin{aligned} \hat{\varvec{\sigma }}(\textbf{t})=\begin{pmatrix}\displaystyle \frac{\sigma _2(t_1,\ldots ,t_{n-1})}{\sigma _1(t_1,\ldots ,t_{n-1})}\\ \vdots \\ \displaystyle \frac{\sigma _n(t_1,\ldots ,t_{n-1})}{\sigma _1(t_1,\ldots ,t_{n-1})} \end{pmatrix}. \end{aligned}$$

Note that \(\hat{\varvec{\sigma }}(\textbf{t})\) is only defined if \(\textbf{t} \in D \setminus \sigma _1^{-1}(0)\), where \(\sigma _1^{-1}(0)\) is the null set of \(\sigma _1\). For a set \(O \subseteq D\setminus \sigma _1^{-1}(0)\) let

$$\begin{aligned} \hat{\varvec{\sigma }}(O)=\{\hat{\varvec{\sigma }}(\textbf{t}): \textbf{t} \in O\}. \end{aligned}$$

In the following we need that, for an rp-hypersurface \(\mathcal {S}\) given by (1), there is an open subset O of \(D\setminus \sigma _1^{-1}(0)\) in \(\mathbb {R}^{n-1}\) satisfying two conditions:

  • Hyperplane condition: \(\varvec{\sigma }(O)\) is not contained in a hyperplane.

  • Inner point condition: There is a \(\textbf{t} \in O\) such that \(\hat{\varvec{\sigma }}(\textbf{t})\) is an inner point of \(\hat{\varvec{\sigma }}(O)\) in \(\mathbb {R}^{n-1}\).

Note that, by the Inverse Function Theorem, the inner point condition is fulfilled if the determinant of the Jacobian of \(\hat{\varvec{\sigma }}\) does not vanish everywhere on O, i.e.,

$$\begin{aligned} \exists \, \textbf{t} \in O\,:\, \det \frac{\partial \hat{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}\ne 0. \end{aligned}$$
(2)

If the hyperplane and the inner point condition are satisfied for O, then we call \(\varvec{\sigma }(O)\) a significant part of an rp-hypersurface.
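For the unit circle \(\mathcal {C}_1\) both conditions can be made concrete: with \(O=(-1,1)\) (chosen here so as to avoid the zeros \(t=\pm 1\) of \(\sigma _1\)), one has \(\hat{\varvec{\sigma }}(t)=2t/(1-t^2)\) with derivative \(2(1+t^2)/(1-t^2)^2\ne 0\), so (2) holds. A small illustrative check in Python (not part of the paper):

```python
def sigma(t):
    # rational parameterization of the unit circle
    d = 1 + t * t
    return ((1 - t * t) / d, 2 * t / d)

def sigma_hat(t):
    # normalized function sigma_2/sigma_1 = 2t/(1 - t^2), defined for t != +-1
    x, y = sigma(t)
    return y / x

# Hyperplane condition: sigma(O) is not contained in a line --
# three of its points span a triangle of nonzero area.
(ax, ay), (bx, by), (cx, cy) = (sigma(t) for t in (-0.5, 0.0, 0.5))
area2 = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
assert abs(area2) > 1e-9

# Inner point condition via (2): central difference of sigma_hat at t = 0,
# where the exact derivative 2(1 + t^2)/(1 - t^2)^2 equals 2.
h = 1e-6
deriv = (sigma_hat(h) - sigma_hat(-h)) / (2 * h)
assert abs(deriv - 2.0) < 1e-9
print("O = (-1, 1) gives a significant part of the unit circle")
```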

The main aim of the paper is the proof of the following theorem:

Theorem 1.1

Let \(\mathcal {P}_1\) and \(\mathcal {P}_2\) be generalized n-dimensional polytopes in \(\mathbb {R}^n\), let \(\gamma \) be a nonzero complex number and let \(\varvec{\sigma }(O)\) be a significant part of an rp-hypersurface. If

$$\begin{aligned} F_{\mathcal {P}_1}(\gamma \varvec{\sigma }(\textbf{t})) = F_{\mathcal {P}_2}(\gamma \varvec{\sigma }(\textbf{t}))\quad \ \forall \,\textbf{t} \in O, \end{aligned}$$

then \(\mathcal {P}_1=\mathcal {P}_2\).

In particular, one may apply Theorem 1.1 to almost all quadric hypersurfaces, i.e., null sets of an equation of the form

$$\begin{aligned} \frac{1}{2}\textbf{s}^{\varvec{T}}A \textbf{s} + \textbf{b}^{\varvec{T}}\textbf{s} + c = 0, \end{aligned}$$
(3)

where A is a symmetric matrix different from the zero matrix:

Theorem 1.2

If \(\mathcal {S}\) is a quadric hypersurface that does not contain a line but contains at least two points, then, up to an exceptional set of hypersurface measure zero, it is an rp-hypersurface with some parameter domain D, and every open subset O of \(D\setminus \sigma _1^{-1}(0)\) in \(\mathbb {R}^{n-1}\) satisfies the hyperplane and the inner point condition.

An important tool is the explicit representation of the Fourier–Laplace transform of a convex polytope, which is a central subject in the theory of the exponential valuation of polytopes built by Brion, Lawrence, Khovanskii, Pukhlikov, and Barvinok [2, 7, 17, 24] and has origins in results of Motzkin and Schoenberg (mentioned by Davis [9]) as well as of Grunsky [15]. We recommend [3, 4] for studying this theory.

We show that the equality \(F_{\mathcal {P}_1}(\gamma \textbf{s})=F_{\mathcal {P}_2}(\gamma \textbf{s})\) for all \(\textbf{s}\in \varvec{\sigma }(O)\) can be extended to \(F_{\mathcal {P}_1}(\textbf{z})=F_{\mathcal {P}_2}(\textbf{z})\) for all \(\textbf{z}\in \mathbb {C}^n\). This is by no means obvious in view of the examples in Sect. 2. Given this extended equality, the assertion of Theorem 1.1 follows from the injectivity of the Fourier(–Laplace) transform, see e.g. [21].

The reconstruction of polytopes from sparse data of moments as well as from sparse data of the Fourier transform has been studied intensively in recent years. This concerns convex polytopes [8, 10, 12, 13, 16, 20] and also non-convex polygons [27] and generalized polytopes [14].

The original motivation for this paper stems from physics: In small-angle X-ray scattering and partly also in wide-angle X-ray scattering one can detect the absolute value of the Fourier transform on a sphere, the Ewald sphere, see e.g. [1, 11, 25]. Therefore uniqueness questions and phase retrieval algorithms are important subjects for application in crystallography and other domains. Related results (without the restriction to hypersurfaces) can be found e.g. in [5, 6, 18]. In the meantime we have used the results of this paper to prove the following theorem [11].

Theorem 1.3

Let \(\mathcal {P}_1\) and \(\mathcal {P}_2\) be convex 3-dimensional polytopes in \(\mathbb {R}^3\) and let \(\varvec{\sigma }(O)\) be a significant part of an rp-hypersurface in \(\mathbb {R}^3\). If

$$\begin{aligned} |F_{\mathcal {P}_1}(-i\varvec{\sigma }(\textbf{t}))| = |F_{\mathcal {P}_2}(-i\varvec{\sigma }(\textbf{t}))|\quad \ \forall \,\textbf{t} \in O, \end{aligned}$$

then \(\mathcal {P}_1\) and \(\mathcal {P}_2\) coincide up to translation and/or reflection in a point.

Our method also implies a result on the null set \(N(\mathcal {P})\) of the Fourier–Laplace transform of an n-dimensional generalized polytope \(\mathcal {P}\) in \(\mathbb {R}^n\), i.e., on

$$\begin{aligned} N(\mathcal {P})=\{\textbf{z}\in \mathbb {C}^n: F_{\mathcal {P}}(\textbf{z})=0\}. \end{aligned}$$

Theorem 1.4

Let \(\mathcal {P}\) be a generalized n-dimensional polytope in \(\mathbb {R}^n\). Then the null set of the Fourier–Laplace transform of \(\mathcal {P}\) does not contain a \(\gamma \)-multiple of a significant part of an rp-hypersurface, where \(\gamma \) is a nonzero complex number.

In particular it follows immediately that the null set of the Fourier–Laplace transform of a polytope does not contain a complex algebraic variety of the form \(\{\textbf{z} \in \mathbb {C}^n: z_1^2+\cdots +z_n^2=\gamma ^2\}\), where \(\gamma \) is a nonzero complex number—a result from [26] (see also [19]) which is related to the Pompeiu problem, see [22, 23].

In Sect. 2 we show that we cannot simply omit the assumptions on the hypersurface \(\mathcal {S}\). We work out several auxiliary results on rational functions and exponential functions in Sect. 3. The proofs of Theorems 1.1 and 1.4 are contained in Sect. 4 and Theorem 1.2 is proved in Sect. 5. Finally, two open problems are presented in Sect. 6.

2 The Role of the Conditions on the Hypersurface

First we show that we cannot simply omit some conditions. We always choose \(\gamma =-i\), i.e., we work with the Fourier transform. The following example shows that the hyperplane condition is essential. Consider the hyperplane

$$\begin{aligned} \mathcal {S}= \{(1,t_1,\ldots ,t_{n-1})^{\varvec{T}}: (t_1,\ldots ,t_{n-1})^{\varvec{T}}\in \mathbb {R}^{n-1} \}. \end{aligned}$$

Then \(\mathcal {S}\) is an rp-hypersurface and \(O=\mathbb {R}^{n-1}\) satisfies the inner point condition, but clearly not the hyperplane condition. Let \(\mathcal {P}_1\) be a generalized n-dimensional polytope in \(\mathbb {R}^n\) and let \(\mathcal {P}_2=\mathcal {P}_1+\textbf{h}\) be a translation of \(\mathcal {P}_1\). It is well known that

$$\begin{aligned} F_{\mathcal {P}_2}(-i\textbf{s})= \textrm{e}^{-{i}\textbf{s} \cdot \textbf{h} }F_{\mathcal {P}_1}(-i\textbf{s}). \end{aligned}$$

Now let

$$\begin{aligned} \textbf{h}=(2\pi ,0,\ldots ,0)^{\varvec{T}}. \end{aligned}$$

Then

$$\begin{aligned} \textbf{s} \cdot \textbf{h} = 2\pi ,\ \text { i.e., }\ \textrm{e}^{-{i}\textbf{s} \cdot \textbf{h}}=1\quad \ \forall \,\textbf{s} \in \mathcal {S}\end{aligned}$$

and hence

$$\begin{aligned} F_{\mathcal {P}_1}(-i\textbf{s})=F_{\mathcal {P}_2}(-i\textbf{s}) \quad \ \forall \,\textbf{s} \in \mathcal {S}. \end{aligned}$$

But, clearly \(\mathcal {P}_1 \ne \mathcal {P}_2\).
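The translation identity used above can be tested in dimension one, where \(F_{[a,b]}(-is)=(\textrm{e}^{-isb}-\textrm{e}^{-isa})/(-is)\). The following sketch (illustrative only; the interval endpoints are chosen ad hoc) confirms \(F_{\mathcal {P}+\textbf{h}}(-i\textbf{s})=\textrm{e}^{-i\textbf{s}\cdot \textbf{h}}F_{\mathcal {P}}(-i\textbf{s})\) numerically:

```python
import cmath
import math

def F(a, b, s):
    # Fourier transform of the interval [a, b] evaluated at -i*s
    z = -1j * s
    return cmath.exp(b * z) / z - cmath.exp(a * z) / z

a, b, h = -1.0, 1.5, 2 * math.pi
for s in (0.3, 1.0, 2.7):
    lhs = F(a + h, b + h, s)                    # transform of the shifted interval
    rhs = cmath.exp(-1j * s * h) * F(a, b, s)   # translation identity
    assert abs(lhs - rhs) < 1e-12

# With h = 2*pi the phase factor equals 1 exactly when s is an integer,
# mirroring the counterexample above (there: s_1 = 1 on the hyperplane).
assert abs(F(a + h, b + h, 1.0) - F(a, b, 1.0)) < 1e-12
print("translation identity verified")
```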

Now we show that we cannot simply omit the assumption that the functions \(\sigma _j(t_1,\ldots ,t_{n-1})\), \(j=1,\ldots ,n\), are rational. Let O be a sufficiently small open ball in \(\mathbb {R}^{n-1}\) with center \(\textbf{t}^*=(\pi /3)\textbf{1}\), where \(\textbf{1}\) is the all-ones vector. Moreover, for \(\textbf{t} \in O\) let

$$\begin{aligned} \sigma _j(t_1,\ldots ,t_{n-1})={\left\{ \begin{array}{ll} t_{j}&{}\text { if } j \in \{1,\ldots ,n-1\},\\ \displaystyle \arccos \frac{1}{2^n\cos t_1\cdots \cos t_{n-1}}&{}\text { if } j=n.\end{array}\right. } \end{aligned}$$

Then O satisfies the hyperplane condition because

$$\begin{aligned} \frac{\partial ^2 \sigma _n(t_1^*,\ldots ,t_{n-1}^*)}{\partial t_1^2}=-\frac{8}{3}\sqrt{3}\ne 0. \end{aligned}$$

Moreover, by (2), O satisfies the inner point condition because

$$\begin{aligned} \det \frac{\partial \hat{\varvec{\sigma }}(\textbf{t}^*)}{\partial \textbf{t}}= \left| \begin{array}{ccccc} -3/\pi &{} 3/\pi &{} 0&{} \cdots &{}0 \\ -3/\pi &{} 0 &{}3/\pi &{}\cdots &{}0\\ &{} &{} &{} \cdots &{}\\ -3/\pi &{} 0 &{} 0 &{} \cdots &{}3/\pi \\ -6/\pi &{} -3/\pi &{} -3/\pi &{} \cdots &{}-3/\pi \end{array}\right| = n\biggl (\!-\frac{3}{\pi }\biggr )^{\!n-1}\!\ne 0. \end{aligned}$$
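The closed form of this determinant can be confirmed numerically. The sketch below (illustrative only; it rebuilds the displayed \((n-1)\times (n-1)\) Jacobian entrywise) checks the identity \(\det = n(-3/\pi )^{n-1}\) for \(n=2,\ldots ,6\):

```python
import math

def det(mat):
    # determinant via Gaussian elimination with partial pivoting
    a = [row[:] for row in mat]
    n = len(a)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-300:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

for n in range(2, 7):
    m = n - 1                        # Jacobian of sigma_hat is (n-1) x (n-1)
    J = [[0.0] * m for _ in range(m)]
    for i in range(m - 1):           # rows for sigma_hat_j = t_{j+1}/t_1
        J[i][0] = -3 / math.pi
        J[i][i + 1] = 3 / math.pi
    J[m - 1][0] = -6 / math.pi       # last row: gradient of sigma_n / t_1
    for c in range(1, m):
        J[m - 1][c] = -3 / math.pi
    assert abs(det(J) - n * (-3 / math.pi) ** (n - 1)) < 1e-12
print("determinant formula n(-3/pi)^(n-1) confirmed for n = 2,...,6")
```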

Let

$$\begin{aligned} \mathcal {P}_1=\{\textbf{x}\in \mathbb {R}^n: -\textbf{1} \le \textbf{x} \le \textbf{1}\}\quad \text {and}\quad \mathcal {P}_2=\{\textbf{x} \in \mathbb {R}^n: -2\cdot \textbf{1} \le \textbf{x} \le 2\cdot \textbf{1}\}. \end{aligned}$$

It is not difficult to check that

$$\begin{aligned} F_{\mathcal {P}_1}(-i\textbf{s})&=\frac{2^n}{s_1\cdots s_n} \sin s_1\cdots \sin s_n,\\ F_{\mathcal {P}_2}(-i\textbf{s})&=\frac{2^n}{s_1\cdots s_n} \sin 2s_1\cdots \sin 2s_n. \end{aligned}$$

This implies

$$\begin{aligned} F_{\mathcal {P}_2}(-i\textbf{s})-F_{\mathcal {P}_1}(-i\textbf{s}) = \frac{2^{2n}}{s_1\cdots s_n} \sin s_1\cdots \sin s_n\biggl (\cos s_1\cdots \cos s_n-\frac{1}{2^n}\biggr ) \end{aligned}$$

and hence

$$\begin{aligned} F_{\mathcal {P}_1}(-i\textbf{s})=F_{\mathcal {P}_2}(-i\textbf{s})\ \quad \forall \,\textbf{s} \in \varvec{\sigma }(O) \end{aligned}$$

since

$$\begin{aligned} \cos s_1\cdots \cos s_n- \frac{1}{2^n} = 0 \ \quad \forall \, \textbf{s} \in \varvec{\sigma }(O). \end{aligned}$$

Accordingly, \(\varvec{\sigma }(O)\) cannot be part of an rp-hypersurface, because otherwise it would even be a significant part of an rp-hypersurface and hence, by Theorem 1.1, \(\mathcal {P}_1=\mathcal {P}_2\), which is clearly false.
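For \(n=2\) this counterexample can be replayed numerically: on \(\varvec{\sigma }(O)\) we have \(\cos s_1\cos s_2=1/4\) by construction, and hence \(\sin 2s_1\sin 2s_2=4\sin s_1\sin s_2\cos s_1\cos s_2=\sin s_1\sin s_2\). An illustrative check (plain Python; the radius of the ball O is chosen ad hoc):

```python
import math
import random

def sigma(t):
    # n = 2: sigma_1(t) = t, sigma_2(t) = arccos(1 / (2^2 cos t))
    return (t, math.acos(1 / (4 * math.cos(t))))

def F1(s1, s2):   # Fourier transform of [-1, 1]^2 at -i*s
    return 4 * math.sin(s1) * math.sin(s2) / (s1 * s2)

def F2(s1, s2):   # Fourier transform of [-2, 2]^2 at -i*s
    return 4 * math.sin(2 * s1) * math.sin(2 * s2) / (s1 * s2)

random.seed(1)
for _ in range(100):
    t = math.pi / 3 + random.uniform(-0.05, 0.05)   # t in a small ball O around pi/3
    s1, s2 = sigma(t)
    assert abs(math.cos(s1) * math.cos(s2) - 0.25) < 1e-12
    assert abs(F1(s1, s2) - F2(s1, s2)) < 1e-10
print("F_P1 = F_P2 on sigma(O) although P1 != P2")
```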

Of course, one can also ask whether the inner point condition is really necessary. In Theorem 1.1 we assume the equality \(F_{\mathcal {P}_1}(\gamma \varvec{\sigma }(\textbf{t}))=F_{\mathcal {P}_2}(\gamma \varvec{\sigma }(\textbf{t}))\) for all \(\textbf{t}\in O\). By continuity, it is sufficient to require this equality only for the rational points of O, i.e., for a countable set of points. In Sect. 6 we pose the open problem whether it is sufficient to have the equality \(F_{\mathcal {P}_1}(\gamma \varvec{\sigma }(\textbf{t}))=F_{\mathcal {P}_2}(\gamma \varvec{\sigma }(\textbf{t}))\) only for elements \(\textbf{t}\) of a finite set T.

3 Independence Theorems over the Field of Rational Functions

Let, as usual, \(\mathbb {C}[z]\) (resp. \(\mathbb {C}[\textbf{z}]\)) be the set of polynomials in the variable z (resp. in the variables that are the components of \(\textbf{z}\)) with complex coefficients. Moreover, let \(\mathbb {C}(z)\) (resp. \(\mathbb {C}(\textbf{z})\)) be the set of rational fractions in the variable z (resp. in the variables that are the components of \(\textbf{z}\)). These polynomials and rational fractions define functions from \(\mathbb {C}\) (resp. \(\mathbb {C}^n\)) into \(\mathbb {C}\), which we also call polynomials (resp. rational functions). For brevity we do not distinguish between the formal terms and the corresponding functions.

In the following let \(\mathcal {O}\) denote the zero function, i.e., the function that is identically 0 on a domain determined by the concrete context. Constant functions are understood in the same way.

Theorem 3.1

Let \(p_k,r_k \in \mathbb {C}(z)\) for all \(k \in [m]\), and assume that \(r_k-r_{k'}\) is not constant for \(k \ne k'\). Let I be an open interval in \(\mathbb {R}\). If

$$\begin{aligned} \sum _{k=1}^m p_k(x)\textrm{e}^{r_k(x)}=0 \quad \ \forall \, x \in I, \end{aligned}$$
(4)

then

$$\begin{aligned} p_k = \mathcal {O}\quad \ \forall \, k \in [m]. \end{aligned}$$
(5)

Proof

We may assume that \(p_k\in \mathbb {C}[z]\) for all \(k \in [m]\), i.e., that these functions are polynomials, because, if necessary, we can multiply (4) by the common denominator of the rational functions \(p_k\), \({k \in [m]}\).

Let S be the (finite) set of all singularities of the rational functions \(r_k\), \({k \in [m]}\). Since all involved functions are analytic on \({\mathbb {C}\setminus S}\) and (4) holds on the open interval I, the Identity Theorem even yields

$$\begin{aligned} \sum _{k=1}^m p_k(z)\,\textrm{e}^{r_k(z)}=0 \quad \ \forall \,z \in \mathbb {C}\setminus S. \end{aligned}$$
(6)

Recall that we can write each rational function r having g singularities, i.e., poles including possibly infinity, in the form

$$\begin{aligned} r=c+\sum _{h=1}^g s^{(h)}, \end{aligned}$$
(7)

where c is a constant and the terms \(s^{(h)}\) are the principal parts of the Laurent series about the poles \(z_h\). Here each \(s^{(h)}\), \(h=1,\ldots ,g\), has the form

$$\begin{aligned} s^{(h)}(z)={\left\{ \begin{array}{ll} \displaystyle \sum _{l=1}^{m_h} a_{-l}^{(h)}(z-z_h)^{-l}&{}\quad \text {if}\; z_h\; \text {is a finite pole of order} \;m_h ,\\ \displaystyle \sum _{l=1}^{m_h} a_{l}^{(h)}z^{l}&{}\quad \text {if}\; z_h=\infty \; \text {is a pole of order}\;m_h. \end{array}\right. } \end{aligned}$$

The representation (7) can be obtained by decomposing the rational function into partial fractions. We allow dummy poles of order 0, i.e., poles whose corresponding principal part is the zero function; hence we may assume in the following that the functions \(r_k\), \(k \in [m]\), have the same poles.

Let \(\sigma _k\) be the sum of the orders of all poles of \(r_k\) and let \(\sigma =\sum _{k=1}^m \sigma _k\). We proceed by induction on \(\sigma \). If \({\sigma =0}\), then all \(r_k\) are constant, hence \(m=1\) by the assumption on the differences \(r_k-r_{k'}\), and \(p_1=\mathcal {O}\) follows since \(\textrm{e}^{r_1}\) has no zeros. Thus we consider the induction step from \(<\sigma \) to \(\sigma \).

If \(m=1\), then the assertion is trivially true. Thus let \(m > 1\). We study an arbitrary pole \(z_0\). In the following we assume that \(z_0\) is finite; the case \(z_0= \infty \) can be treated analogously or via the standard transformation \(z\mapsto 1/z\). Let \(s_k\) be the principal part of the Laurent series of \(r_k\), \(k \in [m]\), about the pole \(z_0\), and let \(\omega \) be the maximal order of \(z_0\) as a pole, taken over all functions \(r_k\), \(k \in [m]\). Then we can write, for all \(k \in [m]\), the principal part \(s_k\) in the form

$$\begin{aligned} s_k(z)=\sum _{l=1}^{\omega } a_{-l,k} (z-z_0)^{-l}, \end{aligned}$$

where \(a_{-\omega ,k} \ne 0\) for at least one \(k \in [m]\), say for \(k=k^*\). We put

$$\begin{aligned} z=z_0+\lambda \textrm{e}^{{i} \varphi } \end{aligned}$$

with \(\lambda , \varphi \in \mathbb {R}\) and consider the limit process \(\lambda \rightarrow 0\). Note that

$$\begin{aligned} (z-z_0)^{-\omega }=\lambda ^{-\omega } \textrm{e}^{-{i}\omega \varphi }. \end{aligned}$$

First we explain how we choose \(\varphi \in [0, 2\pi )\). Let k and \(k'\) be different members of [m] and let \(l \in [\omega ]\). If \(a_{-l,k} \ne a_{-l,k'}\), then there are only finitely many choices for \(\varphi \in [0, 2\pi )\) such that \(\Re (a_{-l,k}\textrm{e}^{-{i}l \varphi })= \Re (a_{-l,k'}\textrm{e}^{-{i} l\varphi })\) because \((a_{-l,k}-a_{-l,k'})\textrm{e}^{-{i}l \varphi }\) has to be purely imaginary. Hence we can choose \(\varphi \in [0, 2\pi )\) in such a way that

$$\begin{aligned} \Re (a_{-l,k}\textrm{e}^{-{i}l \varphi }) = \Re (a_{-l,k'}\textrm{e}^{-{i} l\varphi })\quad \Longrightarrow \quad a_{-l,k} = a_{-l,k'}\quad \forall \, l \in [\omega ]. \end{aligned}$$
(8)

Now we order the \(\omega \)-tuples

$$\begin{aligned} \textbf{a}_k=\bigl (\Re (a_{-\omega ,k}\textrm{e}^{-{i}\omega \varphi }),\ldots ,\Re (a_{-2,k}\textrm{e}^{-{i} 2\varphi }),\Re (a_{-1,k}\textrm{e}^{-{i}\varphi })\bigr ),\ \quad k\in [m], \end{aligned}$$

lexicographically. Obviously, after an adequate renumbering there is some \(\kappa \in [m]\) such that

$$\begin{aligned} \textbf{a}_1=\ldots = \textbf{a}_{\kappa } \end{aligned}$$

and

$$\begin{aligned} \textbf{a}_l <_{lex } \textbf{a}_1 \quad \forall \, l \in \{\kappa +1,\ldots ,m\}. \end{aligned}$$

In view of (8) we have

$$\begin{aligned} s:=s_1 = \ldots = s_{\kappa }. \end{aligned}$$

Let \(r_k'=r_k-s\), \(k \in [m]\). Note that \(r_k'\) is analytic at \(z = z_0\) for all \(k \in [\kappa ]\). Multiplying (6) by \(\textrm{e}^{-s}\) leads to

$$\begin{aligned} \sum _{k=1}^m p_k(z)\textrm{e}^{r_k'(z)}=0\ \quad \forall \,z \in \mathbb {C}\setminus S. \end{aligned}$$
(9)

If

$$\begin{aligned} \sum _{k=1}^{\kappa } p_k(z)\textrm{e}^{r_k'(z)}=0\ \quad \forall \,z \in \mathbb {C}\setminus S, \end{aligned}$$
(10)

then first \(p_k = \mathcal {O}\) for all \(k \in [\kappa ]\) and then, subtracting (10) from (9), also \(p_k = \mathcal {O}\) for all \(k \in \{\kappa +1,\ldots ,m\}\), i.e., (5), follow from the induction hypothesis. Thus assume that (10) is not true (which implies \({\kappa < m}\)). We can develop \(\sum _{k=1}^\kappa p_k(z)\textrm{e}^{r_k'(z)}\) into a power series with first nonzero term \(b_d (z-z_0)^d\), where \(d \in \mathbb {N}\) and \(b_d\ne 0\). Let

$$\begin{aligned} p_k'=(z-z_0)^{-d}p_k,\ \quad k \in [m]. \end{aligned}$$

From (9) we obtain

$$\begin{aligned} \sum _{k=1}^m p_k'(z)\textrm{e}^{r_k'(z)}=0\ \quad \forall \,z \in \mathbb {C}\setminus S. \end{aligned}$$
(11)

In view of the power series expansion we have

$$\begin{aligned} \lim \sum _{k=1}^{\kappa } p_k'(z)\textrm{e}^{r_k'(z)}=b_d\ne 0, \end{aligned}$$
(12)

where the limit may be taken as \(z \rightarrow z_0\) in an arbitrary manner, in particular as \(\lambda \rightarrow 0\).

Now let \(k \in \{\kappa +1,\ldots ,m\}\) and \(\lambda \rightarrow 0\). The principal part \(s_k'\) of the Laurent series of \(r_k'\) about \(z=z_0\) has the form

$$\begin{aligned} s_k'=\sum _{l=1}^{\omega _k} b_{-l,k} (z-z_0)^{-l}, \end{aligned}$$

where \(1 \le \omega _k \le \omega \). Let

$$\begin{aligned} \beta _k=\Re (b_{-\omega _k,k}\textrm{e}^{-{i} \omega _k \varphi }). \end{aligned}$$

In view of the lexicographic ordering,

$$\begin{aligned} \beta _k < 0. \end{aligned}$$

Obviously, for sufficiently small \(\lambda > 0\),

$$\begin{aligned} \Re (r_k'(z))=\beta _k \lambda ^{- \omega _k} (1+O(\lambda )) \le \frac{\beta _k}{2} \lambda ^{-1}. \end{aligned}$$

Consequently,

$$\begin{aligned} |p_k'(z)\textrm{e}^{r_k'(z)}|=O(\lambda ^{-d})|\textrm{e}^{\Re (r_k'(z))}|=O(\lambda ^{-d})\textrm{e}^{\beta _k\lambda ^{-1}/2}=o(1). \end{aligned}$$

Thus

$$\begin{aligned} \sum _{k=\kappa +1}^{m} p_k'(z)\textrm{e}^{r_k'(z)}=o(1). \end{aligned}$$

This is a contradiction to (11) and (12). \(\square \)

The next lemma is folklore, but in order to make the paper self-contained we prove it.

Lemma 3.2

Let \(P\in \mathbb {C}(\textbf{z})\). If there is an open subset O of \(\mathbb {R}^n\) such that \({P(\textbf{x})=0}\) for all \(\textbf{x} \in O\), then \(P=\mathcal {O}\).

Proof

First let P be a polynomial, i.e., \(P \in \mathbb {C}[\textbf{z}]\). We proceed by induction on n. The base case \(n=1\) follows from the fundamental theorem of algebra. For the induction step \(n-1 \rightarrow n\) we write P in the form

$$\begin{aligned} P=\sum _{j=0}^ka_j z_n^j, \end{aligned}$$

where \(a_j\in \mathbb {C}[z_1,\ldots ,z_{n-1}]\). Let \(\pi \) be the projection from \(\mathbb {R}^n\) onto \(\mathbb {R}^{n-1}\) that deletes the n-th component. For each fixed \((x_1,\ldots ,x_{n-1}) \in \pi (O)\) there exist infinitely many values of \(x_n\) such that \((x_1,\ldots ,x_{n-1},x_n) \in O\). Hence, by the fundamental theorem of algebra,

$$\begin{aligned} a_j(x_1,\ldots ,x_{n-1})=0\ \ \quad \ \forall \, (x_1,\ldots ,x_{n-1}) \in \pi (O)\ \text { and }\ \forall \,j. \end{aligned}$$

Since \(\pi (O)\) is an open subset in \(\mathbb {R}^{n-1}\), the induction hypothesis yields \(a_j=\mathcal {O}\) for all j and hence also \(P=\mathcal {O}\). The rational case \(P \in \mathbb {C}(\textbf{z})\) can be reduced to the polynomial case by multiplication with the denominator. \(\square \)

Theorem 3.3

Let \(\mathcal {S}=\{\varvec{\sigma }(\textbf{t}): \textbf{t} \in D\}\) be an rp-hypersurface and let O be an open subset of D in \(\mathbb {R}^{n-1}\) that satisfies the hyperplane condition. Let \(\textbf{v}_k\), \(k \in [m]\), be distinct points in \(\mathbb {R}^n\), let \(P_k \in \mathbb {C}(\textbf{z})\) for all \(k \in [m]\) and let \(\gamma \) be a nonzero complex number. If

$$\begin{aligned} \sum _{k=1}^m P_k(\textbf{s})\textrm{e}^{\gamma \textbf{v}_k \cdot \textbf{s}} = 0 \quad \ \ \forall \, \textbf{s} \in \varvec{\sigma }(O), \end{aligned}$$

then

$$\begin{aligned} P_k(\textbf{s}) = 0\ \quad \ \ \forall \, \textbf{s} \in \varvec{\sigma }(O)\ \text { and }\ \forall \,k \in [m]. \end{aligned}$$

Proof

Let \(\textbf{s}_1\) be a fixed element of \(\varvec{\sigma }(O)\) and let \(\textbf{s}_1=\varvec{\sigma }(\textbf{t}_1)\). We have to show that \(P_k(\textbf{s}_1)=0\) for all \(k\in [m]\). By the hyperplane condition there are further points \(\textbf{s}_l \in \varvec{\sigma }(O)\), \(l=2,\ldots ,n+1,\) such that the points \(\textbf{s}_l\), \(l \in [n+1]\), are not contained in a hyperplane. Let \(\textbf{t}_l\), \(l \in [n+1]\), be points of O for which \(\textbf{s}_l=\varvec{\sigma }(\textbf{t}_l)\), and write \(\textbf{t}_l=(t_{1,l},\ldots ,t_{n-1,l})^{\varvec{T}}\). We may assume that the numbers \(t_{1,l}\), \(l \in [n+1]\), are distinct, because otherwise we could perturb the points \(\textbf{t}_l\) slightly and obtain the desired situation. For \(j \in \{2,\ldots ,n-1\}\) let \(q_j(t)\) be the interpolation polynomial such that \(q_j(t_{1,l}) = t_{j,l}\) for all \(l \in [n+1]\). Moreover, let \(q_1(t)=t\) and \(\textbf{q}=(q_1,\ldots ,q_{n-1})^{\varvec{T}}.\) Then \(\textbf{q}(t_{1,l})=\textbf{t}_l\) for all \(l \in [n+1]\). Now we consider

$$\begin{aligned} \varvec{\rho }(t)=\varvec{\sigma }(\textbf{q}(t)). \end{aligned}$$

By construction,

$$\begin{aligned} \varvec{\rho }(t_{1,l}) = \textbf{s}_l \quad \ \forall \,l \in [n+1]. \end{aligned}$$

Since O is open there exists an open interval I containing \(t_{1,1}\) such that \(\textbf{q}(t) \in O\) for all \(t \in I\). Let for \(k \in [m]\)

$$\begin{aligned} r_k=\gamma \textbf{v}_k \cdot \varvec{\rho }\quad \text { and }\quad p_k=P_k(\varvec{\rho }). \end{aligned}$$

By assumption we have

$$\begin{aligned} \sum _{k=1}^m p_k(t)\textrm{e}^{r_k(t)} = 0 \quad \ \ \forall \,t \in I. \end{aligned}$$

Of course, \(p_k, r_k \in \mathbb {C}(t)\). We further show that \(r_k-r_{k'}\) is not constant for \(k \ne k'\). Indeed, assume that \(r_k-r_{k'}=c\). Then \((\textbf{v}_k-\textbf{v}_{k'}) \cdot \varvec{\rho }={c}/{\gamma }\) and hence the points \(\textbf{s}_l = \varvec{\rho }(t_{1,l})\), \(l \in [n+1]\), are contained in a hyperplane, a contradiction to the choice of these points. Finally we derive from Theorem 3.1 that \(p_k=\mathcal {O}\) for all \(k\in [m]\) and hence, in particular \(0=p_k(t_{1,1})=P_k(\varvec{\rho }(t_{1,1}))=P_k(\textbf{s}_1)\). \(\square \)

A subset S of \(\mathbb {R}^n\) is called a significant set if \(s_1 \ne 0\) for all \(\textbf{s} \in S\), and if the associated set

$$\begin{aligned} \hat{S}=\biggl \{\biggl (\frac{s_2}{s_1},\ldots ,\frac{s_n}{s_1}\biggr ): \textbf{s} \in S\biggr \} \end{aligned}$$

contains an open set in \(\mathbb {R}^{n-1}\). Note that \(\varvec{\sigma }(O)\) is a significant set if \(O \subseteq D\setminus \sigma _1^{-1}(0)\) is open in \(\mathbb {R}^{n-1}\) and satisfies the inner point condition.

Recall that a function \(f:\mathbb {C}^n \rightarrow \mathbb {C}\) is called homogeneous of degree d, where d is an integer, if \(f(\lambda \textbf{z})= \lambda ^d f(\textbf{z})\) for all \(\lambda \in \mathbb {C}\setminus \{0\}\) and all \(\textbf{z}\) of the domain. We exclude \(\lambda =0\) because \(f(\textbf{0})\) need not be defined.

Lemma 3.4

Let \(P \in \mathbb {C}(\textbf{z})\) be homogeneous of degree d and let S be a significant subset of \(\mathbb {R}^n\). If

$$\begin{aligned} P(\textbf{s})=0 \quad \ \forall \,\textbf{s} \in S, \end{aligned}$$
(13)

then \(P = \mathcal {O}\).

Proof

Let \(Q \in \mathbb {C}(\textbf{y})\) be defined by \(Q(\textbf{y})=P(1,y_1,\ldots ,y_{n-1})\), \(\textbf{y}=(y_1,\ldots ,y_{n-1})\). Because of \(P(\textbf{z})=z_1^dP(1,{z_2}/{z_1},\ldots ,{z_n}/{z_1})\) we have

$$\begin{aligned} Q(\textbf{y})=0 \quad \ \forall \,\textbf{y} \in \hat{S}. \end{aligned}$$

Lemma 3.2 implies \(Q = \mathcal {O}\) and thus also \(P=\mathcal {O}\). \(\square \)

Let \(\mathcal {E}_{d}\) be the class of all functions \(F:\mathbb {C}^n \rightarrow \mathbb {C}\) of the form

$$\begin{aligned} F(\textbf{z}) = \sum _{k=1}^m P_k(\textbf{z})\textrm{e}^{\textbf{v}_k \cdot \textbf{z}} , \end{aligned}$$

where the points \(\textbf{v}_k\) are distinct, \(k=1,\ldots ,m\), and the coefficients \(P_k(\textbf{z})\) are homogeneous rational functions of degree d for all \(k \in [m]\). We call these functions E-functions of degree d. Note that any linear combination of E-functions of degree d is again an E-function of degree d, i.e., these E-functions form a vector space.

Theorem 3.5

Let \(\varvec{\sigma }(O)\) be a significant part of an rp-hypersurface and let \(\gamma \) be a nonzero complex number. If F is an E-function and

$$\begin{aligned} F(\gamma \textbf{s}) = 0 \quad \ \forall \, \textbf{s} \in \varvec{\sigma }(O), \end{aligned}$$

then \(F=\mathcal {O}\).

Proof

The proof follows immediately from Theorem 3.3, Lemma 3.4 and the fact that if \(P_k(\textbf{z})\) is a homogeneous rational function of degree d, then so is \(\hat{P}_k(\textbf{z})=P_k(\gamma \textbf{z})\), for all k. \(\square \)

4 Proof of Theorems 1.1 and 1.4

First let \(\mathcal {P}\) be a convex n-dimensional polytope in \(\mathbb {R}^n\). Let \(V_{\mathcal {P}}\) be its vertex set. Here we consider each vertex as an element \(\textbf{v}\) of \(\mathbb {R}^n\). Moreover let \(L_{\mathcal {P}}=\{\textbf{v}-\textbf{v}':\textbf{v}, \textbf{v}' \in V_{\mathcal {P}}\}\). From the theory on the exponential valuation of polytopes [2, 7, 17, 24] it is known that

$$\begin{aligned} F_{\mathcal {P}}(\textbf{z}) = \sum _{\textbf{v} \in V_{\mathcal {P}}} Q_{\mathcal {P},\textbf{v}}(\textbf{z})\textrm{e}^{\textbf{v} \cdot \textbf{z}}\ \quad \forall \, \textbf{z} \in \mathbb {C}^n \setminus Z_{\mathcal {P}}, \end{aligned}$$

where each \(Q_{\mathcal {P},\textbf{v}}(\textbf{z})\) is a rational function of the form

$$\begin{aligned} Q_{\mathcal {P},\textbf{v}}(\textbf{z})=\sum _{I \in \mathcal {I}_{\mathcal {P}}} \frac{\lambda _{\mathcal {P},\textbf{v},I}}{\prod _{\varvec{\ell } \in I} \varvec{\ell } \cdot \textbf{z}}, \end{aligned}$$

the numerators \(\lambda _{\mathcal {P},\textbf{v},I}\) are real numbers, \(\mathcal {I}_{\mathcal {P}}\) is a family of n-element linearly independent subsets of \(L_{\mathcal {P}}\) and \(Z_{\mathcal {P}}\) contains those \(\textbf{z}\) for which a term \(\varvec{\ell }\cdot \textbf{z}\) in the denominator is zero. Note that \(F_{\mathcal {P}}\) is an E-function of degree \(-n\).
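A one-dimensional instance of this vertex representation (an illustrative sketch, not the general formula of [2, 7, 17, 24]) is the interval \([a,b]\), for which \(F(z)=\textrm{e}^{bz}/z-\textrm{e}^{az}/z\), i.e., one rational coefficient \(\pm 1/z\) of degree \(-1\) per vertex. The following code compares this closed form with a direct quadrature of the defining integral:

```python
import cmath

def F_interval(a, b, z):
    # Fourier-Laplace transform of [a, b]: vertex form with Q_v(z) = +-1/z
    return cmath.exp(b * z) / z - cmath.exp(a * z) / z

def F_quad(a, b, z, n=50000):
    # midpoint-rule approximation of the integral of e^{z x} over [a, b]
    h = (b - a) / n
    return h * sum(cmath.exp(z * (a + (k + 0.5) * h)) for k in range(n))

a, b = -1.0, 2.0
for z in (0.7 - 1.3j, -0.4 + 0.9j, 2.0j):
    assert abs(F_interval(a, b, z) - F_quad(a, b, z)) < 1e-6
print("vertex representation matches the defining integral")
```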

Now let \(\mathcal {P}\) be a generalized n-dimensional polytope, i.e.,

$$\begin{aligned} \mathcal {P}=\mathcal {P}_1\cup \cdots \cup \mathcal {P}_m \end{aligned}$$

with convex n-dimensional polytopes \(\mathcal {P}_1,\ldots ,\mathcal {P}_m\) in \(\mathbb {R}^n\). For \(J \subseteq [m]\) let

$$\begin{aligned} \mathcal {P}_J=\bigcap _{j \in J} \mathcal {P}_j \end{aligned}$$

and let \(\mathcal {J}=\{J \subseteq [m]: \mathcal {P}_J \text { is n-dimensional}\}\). Recall that the characteristic function \(\chi _T\) of a subset T of \(\mathbb {R}^n\) is defined by

$$\begin{aligned} \chi _T(\textbf{x}) ={\left\{ \begin{array}{ll}1&{}\text {if } \textbf{x} \in T,\\ 0&{}\text {otherwise}.\end{array}\right. } \end{aligned}$$

It is well known that \(1-\chi _{\mathcal {P}_1\cup \cdots \cup \mathcal {P}_m} = \prod _{j=1}^m (1-\chi _{\mathcal {P}_j})\) and \(\chi _{\bigcap _{j\in J}\mathcal {P}_j}=\prod _{j\in J}\chi _{\mathcal {P}_j}\) which leads to the inclusion-exclusion formula

$$\begin{aligned} \chi _{\mathcal {P}_1\cup \cdots \cup \mathcal {P}_m}\,=\!\sum _{\emptyset \ne J \subseteq [m]} \!(-1)^{|J|+1} \chi _{\mathcal {P}_J}. \end{aligned}$$

Integration over \(\mathbb {R}^n\) and omission of zero terms yields

$$\begin{aligned} F_{\mathcal {P}}(\textbf{z})=\sum _{J \in \mathcal {J}} (-1)^{|J|+1}F_{\mathcal {P}_J}(\textbf{z}), \end{aligned}$$

i.e., a linear combination of E-functions of degree \(-n\), which is again an E-function of degree \(-n\).
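The inclusion–exclusion step above can be illustrated with counting measure in place of Lebesgue measure (a discrete stand-in, not the paper's setting): for finite point sets the same alternating sum reproduces the size of the union. The sets below are arbitrary lattice boxes chosen for the illustration.

```python
from itertools import combinations

# Three "polytopes" as finite sets of lattice points (a discrete stand-in).
P = [
    {(x, y) for x in range(0, 6) for y in range(0, 6)},
    {(x, y) for x in range(3, 9) for y in range(2, 8)},
    {(x, y) for x in range(1, 4) for y in range(4, 10)},
]

union = set().union(*P)

# Inclusion-exclusion: |P_1 u ... u P_m| = sum over nonempty J of (-1)^{|J|+1} |P_J|
total = 0
for r in range(1, len(P) + 1):
    for J in combinations(range(len(P)), r):
        inter = set.intersection(*(P[j] for j in J))
        total += (-1) ** (r + 1) * len(inter)

assert total == len(union)
print("inclusion-exclusion count:", total)
```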

For the proof of Theorem 1.1 let \(\mathcal {P}_1\) and \(\mathcal {P}_2\) be the two generalized n-dimensional polytopes given in the theorem. By assumption,

$$\begin{aligned} F_{\mathcal {P}_1}(\gamma \textbf{s}) - F_{\mathcal {P}_2}(\gamma \textbf{s})=0\quad \ \forall \, \textbf{s} \in \varvec{\sigma }(O). \end{aligned}$$

Since \(F_{\mathcal {P}_1}-F_{\mathcal {P}_2}\) is an E-function of degree \(-n\), Theorem 3.5 implies

$$\begin{aligned} F_{\mathcal {P}_1}-F_{\mathcal {P}_2}=\mathcal {O}. \end{aligned}$$

Since the Fourier–Laplace transform is obviously continuous in \(\textbf{z}\) we finally obtain

$$\begin{aligned} F_{\mathcal {P}_1}(\textbf{z})-F_{\mathcal {P}_2}(\textbf{z}) =0 \quad \ \forall \,\textbf{z} \in \mathbb {C}^n. \end{aligned}$$

The injectivity of the Fourier(–Laplace) transform, see e.g. [21], implies \(\mathcal {P}_1=\mathcal {P}_2\). In the same way, for a generalized n-dimensional polytope \(\mathcal {P}\),

$$\begin{aligned} F_{\mathcal {P}}(\gamma \textbf{s}) =0\ \quad \forall \,\textbf{s} \in \varvec{\sigma }(O) \end{aligned}$$

implies

$$\begin{aligned} F_{\mathcal {P}}(\textbf{z})=0 \quad \ \forall \, \textbf{z} \in \mathbb {C}^n, \end{aligned}$$

and hence \(\mathcal {P}=\emptyset \), which proves Theorem 1.4.

5 Proof of Theorem 1.2

First we study the inner point condition more generally. In Sect. 1 we mentioned already that (2) implies the inner point condition. But the explicit computation of the determinant \(\det ({\partial \hat{\varvec{\sigma }}(\textbf{t})}/{\partial \textbf{t}})\) is often difficult and thus we need some further sufficient conditions for (2). Let

$$\begin{aligned} \overline{\textbf{s}}=\begin{pmatrix}s_1\\ \vdots \\ s_{n-1}\end{pmatrix}\quad \text { and }\quad \overline{\varvec{\sigma }}=\begin{pmatrix}\sigma _1\\ \vdots \\ \sigma _{n-1}\end{pmatrix}. \end{aligned}$$

In the following we will assume that there is some \(\textbf{t}\in O\) such that

$$\begin{aligned} \det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}\ne 0 \end{aligned}$$
(14)

and discuss this assumption in detail at the end of this section. Replacing O by an adequate open subset, we may, by the Inverse Function Theorem, even assume that (14) holds for all \(\textbf{t} \in O\) and that \(\overline{\varvec{\sigma }}\) maps O bijectively onto \(\overline{O}=\overline{\varvec{\sigma }}(O)\). With the notation \(f(\overline{\textbf{s}})=\sigma _n(\overline{\varvec{\sigma }}^{-1}(\overline{\textbf{s}}))\) we have for \(\overline{\textbf{s}}=\overline{\varvec{\sigma }}(\textbf{t})\)

$$\begin{aligned} \sigma _n(\textbf{t}) = f(\overline{\textbf{s}}). \end{aligned}$$

Let finally

$$\begin{aligned} \varvec{\psi }(\overline{\textbf{s}})=\begin{pmatrix}s_2/s_1\\ \vdots \\ s_{n-1}/s_1\\ \,f(s_1,\ldots ,s_{n-1})/s_1\,\end{pmatrix}. \end{aligned}$$

Note that

$$\begin{aligned} \hat{\varvec{\sigma }}(\textbf{t}) = \varvec{\psi }(\overline{\varvec{\sigma }}(\textbf{t}))\quad \ \forall \, \textbf{t} \in O. \end{aligned}$$

Moreover,

$$\begin{aligned} \det \frac{\partial \hat{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=\det \frac{\partial \varvec{\psi }(\overline{\textbf{s}})}{\partial \overline{\textbf{s}}}\cdot \det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}. \end{aligned}$$

Thus, under the assumption (14) we only need that for \(\overline{\textbf{s}}=\overline{\varvec{\sigma }}(\textbf{t})\)

$$\begin{aligned} \det \frac{\partial \varvec{\psi }(\overline{\textbf{s}})}{\partial \overline{\textbf{s}}}\ne 0. \end{aligned}$$
(15)

It is easy to check that

$$\begin{aligned} \det \frac{\partial \varvec{\psi }(\overline{\textbf{s}})}{\partial \overline{\textbf{s}}}= \frac{1}{s_1^n}( \overline{\textbf{s}}\cdot \nabla f(\overline{\textbf{s}}) -f(\overline{\textbf{s}})). \end{aligned}$$
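This determinant identity can be spot-checked numerically for \(n=3\). In the sketch below, f is a hypothetical test function (not from the paper), the Jacobian of \(\varvec{\psi }\) is approximated by central differences, and the comparison is up to sign, since the sign of the determinant depends on the chosen ordering of the coordinates.

```python
def f(s1, s2):
    # hypothetical test function, not from the paper
    return s1**2 + s2**3

def grad_f(s1, s2):
    return (2 * s1, 3 * s2**2)

def psi(s1, s2):
    # psi(s_bar) = (s2/s1, f(s_bar)/s1) for n = 3
    return (s2 / s1, f(s1, s2) / s1)

def jac_det(F, s1, s2, h=1e-6):
    # determinant of the central-difference Jacobian of F: R^2 -> R^2
    c1 = [(F(s1 + h, s2)[r] - F(s1 - h, s2)[r]) / (2 * h) for r in range(2)]
    c2 = [(F(s1, s2 + h)[r] - F(s1, s2 - h)[r]) / (2 * h) for r in range(2)]
    return c1[0] * c2[1] - c1[1] * c2[0]

s1, s2 = 1.3, 0.7
g1, g2 = grad_f(s1, s2)
rhs = (s1 * g1 + s2 * g2 - f(s1, s2)) / s1**3  # (s_bar . grad f - f) / s1^n
assert abs(abs(jac_det(psi, s1, s2)) - abs(rhs)) < 1e-6
```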

Let \(I_{\overline{\textbf{s}}}=\{\lambda : \lambda \overline{\textbf{s}}\in \overline{O}\}\) and let \(\varphi :I_{\overline{\textbf{s}}} \rightarrow \mathbb {R}\) be defined by

$$\begin{aligned} \varphi (\lambda )=f(\lambda \overline{\textbf{s}}). \end{aligned}$$

Then for \(\overline{\textbf{s}}\in \overline{O}\) the following conditions are equivalent for \(\lambda \in I_{\overline{\textbf{s}}}\):

$$\begin{aligned} \lambda \overline{\textbf{s}}\cdot \nabla f(\lambda \overline{\textbf{s}}) -f(\lambda \overline{\textbf{s}})&=0,\\ \lambda \varphi '(\lambda )- \varphi (\lambda )&= 0,\\ \varphi (\lambda )&=c \lambda \ \text { for some } c \in \mathbb {R},\\ f(\lambda \overline{\textbf{s}})&=\lambda f(\overline{\textbf{s}}). \end{aligned}$$

Hence we have already proved the following theorem:

Theorem 5.1

The inner point condition is fulfilled for the open set \(O \subseteq D\setminus \sigma _1^{-1}(0)\) if there is some \(\textbf{t} \in O\) such that (14) holds and with the point

$$\begin{aligned} \textbf{s}=\begin{pmatrix}\overline{\textbf{s}}\\ f(\overline{\textbf{s}})\end{pmatrix} \in \mathcal {S}\end{aligned}$$

not all points \(\lambda \textbf{s}\) belong to \(\mathcal {S}\) as \(\lambda \) runs through a sufficiently small neighborhood of 1.

Lemma 5.2

Let \(\mathcal {S}= \{ \textbf{s}\in \mathbb {R}^n:(1/2)\textbf{s}^{\varvec{T}}A \textbf{s} + \textbf{b}^{\varvec{T}}\textbf{s} + c = 0\}\) be a quadric hypersurface that contains at least two points but no line. Then \({\textrm{rk}}(A|\textbf{b})=n\) and \(\mathcal {S}\) is an rp-hypersurface, possibly up to an exceptional set of hypersurface measure zero.

Proof

Assume that \({\textrm{rk}}(A|\textbf{b})<n\). Then there exists some \(\textbf{r}\ne \textbf{0}\) in \(\mathbb {R}^n\) such that \(A \textbf{r}=\textbf{0}\) and \(\textbf{b}^{\varvec{T}}\textbf{r} = 0\). It follows that for any \(\textbf{s} \in \mathcal {S}\) the whole line \(\{\textbf{s}+\lambda \textbf{r}: \lambda \in \mathbb {R}\}\) belongs to \(\mathcal {S}\), a contradiction.
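For illustration, here is a made-up instance of this argument (not from the paper): with \(A=\textrm{diag}(2,0)\), \(\textbf{b}=\textbf{0}\), \(c=-1\) the quadric \(s_1^2=1\) consists of two parallel lines, \(\textbf{r}=(0,1)\) satisfies \(A\textbf{r}=\textbf{0}\) and \(\textbf{b}^{\varvec{T}}\textbf{r}=0\), and shifting any point of the quadric along \(\textbf{r}\) stays on the quadric.

```python
def Q(s):
    # the quadric (1/2) s^T A s + b^T s + c with A = diag(2, 0), b = 0, c = -1,
    # i.e. s1^2 - 1 = 0: two parallel vertical lines in R^2
    return s[0] * s[0] - 1.0

r = (0.0, 1.0)   # satisfies A r = 0 and b^T r = 0, so rk(A|b) = 1 < 2
s = (1.0, 0.5)   # a point on the quadric
for lam in (-2.0, 0.3, 7.0):
    p = (s[0] + lam * r[0], s[1] + lam * r[1])
    assert Q(p) == 0.0  # the whole line s + lambda*r lies on the quadric
```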

From linear algebra it is well known that there exists an affine transformation \(\textbf{s}=T\textbf{s}'+\textbf{v}\) with a regular matrix T and a shift vector \(\textbf{v}\) such that one of the following normal forms can be generated:

  • Case 1: \({\textrm{rk}}(A)=n-1\).

    $$\begin{aligned} \sum _{j=1}^{n-1} \epsilon _j s_j'^2=s_{n}',\quad \text { where }\ \forall \,j:\epsilon _j \in \{-1,1\}. \end{aligned}$$
    (16)
  • Case 2: \({\textrm{rk}}(A)=n\) and not all eigenvalues of A have the same sign.

    $$\begin{aligned} s_1's_n'+\sum _{j=2}^{n-1} \epsilon _j s_j'^2=c'\ne 0,\quad \text { where }\ \forall \,j:\epsilon _j \in \{-1,1\}. \end{aligned}$$
    (17)

Note that the line condition implies \(c'\ne 0\) because otherwise \(\mathcal {S}\) would contain the line \(s_2'=\ldots =s_n'=0\).

  • Case 3: \({\textrm{rk}}(A)=n\) and all eigenvalues of A have the same sign.

    $$\begin{aligned} \sum _{j=1}^{n} s_j'^2=1. \end{aligned}$$
    (18)

In Cases 1 and 2 we can take the parameters \(t_j=s_j'\), \(j \in [n-1]\), and thus \(\textbf{s}'\) is a function of \(\textbf{t}\) which we write in the form \(\textbf{s}'=\varvec{\sigma }'(\textbf{t})\). This leads to \(\varvec{\sigma }(\textbf{t}) = T \varvec{\sigma }'(\textbf{t}) + \textbf{v}\). In Case 3 we use spherical coordinates:

$$\begin{aligned} s_1'&=\cos \varphi _1,\\ s_2'&=\sin \varphi _1\cos \varphi _2,\\ s_3'&=\sin \varphi _1\sin \varphi _2\cos \varphi _3,\\ \vdots \\ s_{n-1}'&=\sin \varphi _1\cdots \sin \varphi _{n-2}\cos \varphi _{n-1},\\ s_{n}'&=\sin \varphi _1\cdots \sin \varphi _{n-2}\sin \varphi _{n-1}, \end{aligned}$$

where \(0 \le \varphi _j \le \pi \) for all \(j \in [n-2]\) and \(-\pi < \varphi _{n-1} \le \pi \). Thus \(\textbf{s}'\) is a function of \(\varvec{\varphi }\) which we write in the form \(\textbf{s}'=\varvec{\tau }(\varvec{\varphi })\). It is well known that for

$$\begin{aligned} t_j=\tan \frac{\varphi _j}{2}\ \text { and }\ \varphi _j \ne \pi ,\ \quad j \in [n-1], \end{aligned}$$

i.e., \(t_j\in \mathbb {R}_{\ge 0}\) for \(j \in [n-2]\) and \(t_{n-1} \in \mathbb {R}\), we have

$$\begin{aligned} \cos \varphi _j=\frac{1-t_j^2}{1+t_j^2},\qquad \sin \varphi _j=\frac{2 t_j}{1+t_j^2}, \end{aligned}$$

which shows that the components of \(\textbf{s}'\), and consequently also of \(\textbf{s}\), are rational functions of \(t_1,\ldots ,t_{n-1}\), up to the cases where one of the involved angles equals \(\pi \).

If we use the notation \(\varvec{\sigma }'(\textbf{t})=\varvec{\tau }(\varvec{\varphi }(\textbf{t}))\), where \(\varphi _j(\textbf{t})=2\arctan t_j\), then we have \(\varvec{\sigma }(\textbf{t}) = T \varvec{\sigma }'(\textbf{t}) + \textbf{v}\). \(\square \)

Note that we have \(D=\mathbb {R}^{n-1}\) in Cases 1 and 3 and \(D=\mathbb {R}_{\ne 0}\times \mathbb {R}^{n-2}\) in Case 2.
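As a sanity check on Case 3 (a sketch, not part of the paper), the composition of the spherical coordinates with the substitution \(t_j=\tan (\varphi _j/2)\) yields points with rational coordinates that lie exactly on the unit sphere; the identity \(\cos ^2\varphi +\sin ^2\varphi =1\) holds exactly for the rational expressions, since \((1-t^2)^2+(2t)^2=(1+t^2)^2\). Below this is tested for \(n=4\) at random parameter values.

```python
import random

def rational_trig(t):
    # Weierstrass substitution t = tan(phi/2): returns (cos phi, sin phi)
    return (1 - t * t) / (1 + t * t), 2 * t / (1 + t * t)

def sphere_point(ts):
    # spherical-coordinate point on the unit sphere S^{n-1}, n - 1 = len(ts),
    # with every cosine/sine replaced by its rational expression in t_j
    point, prod = [], 1.0
    for c, s in (rational_trig(t) for t in ts):
        point.append(prod * c)
        prod *= s
    point.append(prod)
    return point

random.seed(0)
for _ in range(100):
    p = sphere_point([random.uniform(-3, 3) for _ in range(3)])  # n = 4
    assert abs(sum(x * x for x in p) - 1.0) < 1e-12
```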

We further restrict \(\varvec{\sigma }'\) to the first \(n-1\) components and use the notation

$$\begin{aligned} \overline{\textbf{s}}'=\overline{\varvec{\sigma }}'(\textbf{t}) = \begin{pmatrix}\sigma _1'(\textbf{t})\\ \vdots \\ \sigma _{n-1}'(\textbf{t})\end{pmatrix}. \end{aligned}$$

Lemma 5.3

Let \(\mathcal {S}\) be a quadric hypersurface of the form of Lemma 5.2 and let \(O \subseteq D\setminus \sigma _1^{-1}(0)\) be an open set. Then there is some \(\textbf{t} \in O\) such that

$$\begin{aligned} \det \frac{\partial \overline{\varvec{\sigma }}'(\textbf{t})}{\partial \textbf{t}}\ne 0. \end{aligned}$$
(19)

Proof

We study the same cases as in the proof of Lemma 5.2. In Cases 1 and 2 the Jacobian \({\partial \overline{\varvec{\sigma }}'(\textbf{t})}/{\partial \textbf{t}}\) is the identity matrix. In Case 3 we have, with \(\varvec{\varphi }=\varvec{\varphi }(\textbf{t})\) given by \(\varphi _j=2\arctan t_j\),

$$\begin{aligned} \frac{\partial \overline{\varvec{\sigma }}'(\textbf{t})}{\partial \textbf{t}}=\frac{\partial \overline{\varvec{\sigma }}'}{\partial \varvec{\varphi }}\cdot \frac{\partial \varvec{\varphi }}{\partial \textbf{t}}. \end{aligned}$$

Here the second matrix is the diagonal matrix with entries \(2/(1+t_j^2)\ne 0\) and hence obviously regular, and for the first matrix it is easy to see that

$$\begin{aligned} \det \frac{\partial \overline{\varvec{\sigma }}'}{\partial \varvec{\varphi }}=(-1)^{n-1} \prod _{j=1}^{n-1} \sin ^{n-j}\varphi _j \end{aligned}$$

which cannot vanish identically as \(\textbf{t}\), and hence also \(\varvec{\varphi }\), runs through an open set in \(\mathbb {R}^{n-1}\). \(\square \)
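For \(n=3\) the determinant formula can be verified directly: here \(\overline{\varvec{\sigma }}'(\varvec{\varphi })=(\cos \varphi _1,\ \sin \varphi _1\cos \varphi _2)\) and the claimed value is \((-1)^{2}\sin ^2\varphi _1\sin \varphi _2\). A finite-difference check at random angles (a sketch, not from the paper):

```python
import math, random

def sigma_bar(phi1, phi2):
    # first n-1 = 2 components of the spherical parametrization for n = 3
    return (math.cos(phi1), math.sin(phi1) * math.cos(phi2))

def jac_det(phi1, phi2, h=1e-6):
    # determinant of the central-difference Jacobian w.r.t. (phi1, phi2)
    c1 = [(sigma_bar(phi1 + h, phi2)[r] - sigma_bar(phi1 - h, phi2)[r]) / (2 * h)
          for r in range(2)]
    c2 = [(sigma_bar(phi1, phi2 + h)[r] - sigma_bar(phi1, phi2 - h)[r]) / (2 * h)
          for r in range(2)]
    return c1[0] * c2[1] - c1[1] * c2[0]

random.seed(0)
for _ in range(20):
    p1, p2 = random.uniform(0.1, 3.0), random.uniform(0.1, 3.0)
    claimed = math.sin(p1) ** 2 * math.sin(p2)  # (-1)^{n-1} prod sin^{n-j} phi_j, n = 3
    assert abs(jac_det(p1, p2) - claimed) < 1e-8
```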

Lemma 5.4

Let \(\mathcal {S}\) be a quadric hypersurface of the form of Lemma 5.2 and let \(O \subseteq D\setminus \sigma _1^{-1}(0)\) be an open set. If there is some \(\textbf{t} \in O\) such that

$$\begin{aligned} \det \frac{\partial \overline{\varvec{\sigma }}'(\textbf{t})}{\partial \textbf{t}}\ne 0, \end{aligned}$$
(20)

then O satisfies the hyperplane condition.

Proof

The condition (19) implies that \(\overline{\varvec{\sigma }}'(O)\) contains an open set. Now the normal forms (16)–(18) imply that \(\varvec{\sigma }'(O)\)—and hence also \(\varvec{\sigma }(O)\)—cannot be contained in a hyperplane because the corresponding normalized gradient is not constant. \(\square \)

Proof of Theorem 1.2

By Lemmas 5.3 and 5.4 the open set O satisfies the hyperplane condition. Moreover, for \(\textbf{s} \in \mathcal {S}\) not all points \(\lambda \textbf{s}\) belong to \(\mathcal {S}\) as \(\lambda \) runs through a sufficiently small neighborhood of 1. Indeed, if this were not the case, then \(\mathcal {S}\) would contain the whole line \(\{\lambda \textbf{s}: \lambda \in \mathbb {R}\}\), which is excluded. In view of Theorem 5.1 it remains to check (14). Note that \(\det ({\partial \overline{\varvec{\sigma }}(\textbf{t})}/{\partial \textbf{t}})\) is a rational function of \(\textbf{t}\). By Lemma 3.2, if this determinant (multiplied by the main denominator) were zero for all \(\textbf{t}\in O\), then it would be zero for all \(\textbf{t} \in D\). Thus it is sufficient to prove that

$$\begin{aligned} \exists \, \textbf{t} \in D: \quad \det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}\ne 0. \end{aligned}$$
(21)

Note that

$$\begin{aligned} \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=\frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{s}'}\cdot \frac{\partial \varvec{s}'}{\partial \textbf{t}}=\overline{T}\frac{\partial \varvec{s}'}{\partial \textbf{t}}, \end{aligned}$$
(22)

where \(\overline{T}\) is the \((n-1)\times n\) submatrix of T obtained from T by deleting the last row. Let \(\overline{\textbf{T}}_j\) be its j-th column. Analogously, deleting the last component of \(\textbf{v}\) gives the vector \(\overline{\textbf{v}}\). Further let

$$\begin{aligned} \hat{\overline{T}}_j&=(\overline{\textbf{T}}_1|\ldots |\overline{\textbf{T}}_{j-1}|\overline{\textbf{T}}_n|\overline{\textbf{T}}_{j+1}|\ldots |\overline{\textbf{T}}_{n-1}), \qquad j \in [n-1],\\ \hat{\overline{T}}_n&=(\overline{\textbf{T}}_1|\ldots |\overline{\textbf{T}}_{n-1}). \end{aligned}$$

Since the transformation matrix T is regular we have \({\textrm{rk}}(\overline{T})=n-1\) and hence there is some \(j^* \in [n]\) such that

$$\begin{aligned} \det (\hat{\overline{T}}_{j^*})\ne 0. \end{aligned}$$
(23)

The chain rule (22) implies

$$\begin{aligned} \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}} = \sum _{j=1}^{n}\overline{\textbf{T}}_j \nabla (\varvec{\sigma }_j'(\textbf{t}))^{\varvec{T}}. \end{aligned}$$
(24)

Now we distinguish between the cases in Lemma 5.2.

Cases 1 and 2. Since \(\varvec{\sigma }_j'(\textbf{t})=t_j\) for \(j \in [n-1]\), we have

$$\begin{aligned} \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=(\overline{\textbf{T}}_1|\ldots |\overline{\textbf{T}}_{n-1})+\overline{\textbf{T}}_n\nabla (\varvec{\sigma }_n'(\textbf{t}))^{\varvec{T}}. \end{aligned}$$

In the following we use (23) several times.

Case 1. Since \(\varvec{\sigma }_n'(\textbf{t})=\sum _{j=1}^{n-1} \epsilon _j t_j^2\),

$$\begin{aligned} \nabla (\varvec{\sigma }_n'(\textbf{t})) = 2 \begin{pmatrix}\epsilon _1 t_1\\ \vdots \\ \epsilon _{n-1}t_{n-1} \end{pmatrix} \end{aligned}$$

which gives

$$\begin{aligned} \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{ \partial \textbf{t}}=(\overline{\textbf{T}}_j+2\epsilon _j t_j \overline{\textbf{T}}_n)_{j \in [n-1]}. \end{aligned}$$

If \(j^*=n\), then we take \(\textbf{t} = \textbf{0}\) and have at this point

$$\begin{aligned} \det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}} = \det (\hat{\overline{T}}_{n}) \ne 0. \end{aligned}$$

If \(j^*<n\), then we take \(t_j=0\) for \(j \in [n-1] \setminus \{j^*\}\) and \(t_{j^*}= \lambda \) and have for these points that

$$\begin{aligned} \lim _{\lambda \rightarrow \infty } \frac{1}{\lambda }\det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=2\epsilon _{j^*}\det (\hat{\overline{T}}_{j^*})\ne 0. \end{aligned}$$

Thus (21) is proved in this case.
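The limiting argument for \(j^*<n\) can be illustrated numerically for \(n=3\) and \(j^*=1\) (the matrix \(\overline{T}\) below is a random choice for illustration, not from the paper): at \(\textbf{t}=(\lambda ,0)\) the Jacobian has columns \(\overline{\textbf{T}}_1+2\epsilon _1\lambda \overline{\textbf{T}}_3\) and \(\overline{\textbf{T}}_2\), and the rescaled determinant approaches \(2\epsilon _{j^*}\det (\hat{\overline{T}}_{j^*})\).

```python
import random

def det2(u, v):
    # determinant of the 2x2 matrix with columns u, v
    return u[0] * v[1] - u[1] * v[0]

random.seed(1)
# columns of a random 2x3 matrix T_bar (an arbitrary choice for illustration)
T1, T2, T3 = ([random.uniform(-1, 1) for _ in range(2)] for _ in range(3))
eps1 = 1
lam = 1e6
# at t = (lam, 0) the Jacobian columns are T1 + 2*eps1*lam*T3 and T2
col1 = [T1[k] + 2 * eps1 * lam * T3[k] for k in range(2)]
limit = 2 * eps1 * det2(T3, T2)  # 2*eps_{j*} * det(T_hat_{j*}) with T_hat_1 = (T3|T2)
assert abs(det2(col1, T2) / lam - limit) < 1e-4
```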

Case 2. Since \(\varvec{\sigma }_n'(\textbf{t})=\bigl (c'-\sum _{j=2}^{n-1} \epsilon _j t_j^2\bigr )/t_1\) we have

$$\begin{aligned} \nabla (\varvec{\sigma }_n'(\textbf{t})) = -\frac{1}{t_1} \begin{pmatrix} \sigma _n'(\textbf{t})\\ 2 \epsilon _2 t_2\\ \vdots \\ 2 \epsilon _{n-1}t_{n-1}\end{pmatrix}. \end{aligned}$$

We take \(t_j=0\) for \(j \in \{2,\ldots ,n-1\}\) and \(t_{1}= \lambda \) and have for these points

$$\begin{aligned} \det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=\det (\hat{\overline{T}}_{n}) - \frac{c'}{\lambda ^2} \det (\hat{\overline{T}}_{1}). \end{aligned}$$

If \(j^*=n\), then we have

$$\begin{aligned} \lim _{\lambda \rightarrow \infty }\det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=\det (\hat{\overline{T}}_{n})\ne 0, \end{aligned}$$

and if \(j^*=1\), then we have

$$\begin{aligned} \lim _{\lambda \rightarrow 0}\lambda ^2\det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=- c'\det (\hat{\overline{T}}_{1})\ne 0. \end{aligned}$$

Thus we may assume that \(\det (\hat{\overline{T}}_{1}) = \det (\hat{\overline{T}}_{n}) =0\) and that \(1< j^* < n\). Here we take \(t_{j^*}=\sqrt{c'/{\epsilon _{j^*}}}\) (possibly in \(\mathbb {C}\)), \(t_1=1\), and \(t_j=0\) for \(j \in [n-1]\setminus \{1,j^*\}\). Then

$$\begin{aligned} \det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=\det (\hat{\overline{T}}_{n})-2\epsilon _{j^*}t_{j^*}\det (\hat{\overline{T}}_{j^*})\ne 0. \end{aligned}$$

Thus (21) is proved also in this case.

Case 3. We have, with \(\varvec{\varphi }=\varvec{\varphi }(\textbf{t})\),

$$\begin{aligned} \frac{\partial \textbf{s}'}{\partial \textbf{t}}=\frac{\partial \textbf{s}'}{\partial \varvec{\varphi }}\cdot \frac{\partial \varvec{\varphi }}{\partial \textbf{t}} \end{aligned}$$

and consequently by (22)

$$\begin{aligned} \det \frac{\partial \overline{\varvec{\sigma }}(\textbf{t})}{\partial \textbf{t}}=\det \biggl (\overline{T}\frac{\partial \textbf{s}'}{\partial \varvec{\varphi }}\biggr )\cdot \det \frac{\partial \varvec{\varphi }}{\partial \textbf{t}} \end{aligned}$$

and thus it is sufficient to prove

$$\begin{aligned} \exists \,\varvec{\varphi } \in [0,\pi )^{n-2} \times (-\pi ,\pi ): \quad \det \biggl (\overline{T}\frac{\partial \textbf{s}'}{\partial \varvec{\varphi }}\biggr ) \ne 0. \end{aligned}$$
(25)

With the notation

$$\begin{aligned} \textbf{a}(\varphi _1) =\begin{pmatrix}-\sin \varphi _1\\ \cos \varphi _1\end{pmatrix} \end{aligned}$$

and, for \(n>2\),

$$\begin{aligned} \textbf{a}(\varphi _1,\ldots ,\varphi _{n-1}) =\begin{pmatrix}-\sin \varphi _1\\ \cos \varphi _1\cos \varphi _2\\ \cos \varphi _1\sin \varphi _2\cos \varphi _3\\ \vdots \\ \cos \varphi _1\sin \varphi _2\sin \varphi _3\ldots \sin \varphi _{n-2}\cos \varphi _{n-1}\\ \cos \varphi _1\sin \varphi _2\sin \varphi _3 \ldots \sin \varphi _{n-2} \sin \varphi _{n-1}\end{pmatrix} \end{aligned}$$

we have

$$\begin{aligned} \frac{\partial \textbf{s}'}{\partial \varvec{\varphi }} = \prod _{j=1}^{n-2} \sin ^{n-j-1}\varphi _j\begin{pmatrix}\textbf{a}(\varphi _1,\ldots ,\varphi _{n-1})&{}0&{}\dots &{}0&{}0\\ &{}\textbf{a}(\varphi _2,\ldots ,\varphi _{n-1})&{}\dots &{}0&{}0\\ {} &{}&{}\dots &{}&{}\\ {} &{}&{}&{}\textbf{a}(\varphi _{n-2},\varphi _{n-1})&{}0\\ {} &{}&{}&{}&{}\textbf{a}(\varphi _{n-1}) \end{pmatrix}. \end{aligned}$$

Note that

$$\begin{aligned} \textbf{a}\biggl (\frac{\pi }{2},\ldots \biggr ) =\begin{pmatrix}-1\\ 0\\ \vdots \\ 0\\ 0\end{pmatrix}\quad \text {and}\quad \textbf{a}\biggl (\frac{\pi }{4},\frac{\pi }{2},\ldots ,\frac{\pi }{2}\biggr ) =\begin{pmatrix}-\sqrt{2}/2\\ 0\\ \vdots \\ 0\\ \sqrt{2}/2\end{pmatrix}. \end{aligned}$$

If \(j^*=n\), then we take \(\varphi _1=\ldots =\varphi _{n-1}={\pi }/{2}\) and obtain for this point

$$\begin{aligned} \det \biggl (\overline{T}\frac{\partial \textbf{s}'}{\partial \varvec{\varphi }}\biggr ) = \det (-\hat{\overline{T}}_{n})\ne 0. \end{aligned}$$

Thus we may assume that \(1 \le j^* < n\) and \(\det (\hat{\overline{T}}_{n}) = 0\). Now we take \(\varphi _1=\ldots =\varphi _{j^*-1}={\pi }/{2}\), \(\varphi _{j^*}={\pi }/{4}\), \(\varphi _{j^*+1}=\ldots =\varphi _{n-1}=\pi /2\), and obtain

$$\begin{aligned} \det \biggl (\overline{T}\frac{\partial \textbf{s}'}{\partial \varvec{\varphi }}\biggr )=\frac{\sqrt{2}}{2}\bigl (\det (-\hat{\overline{T}}_{n})+(-1)^n\det (\hat{\overline{T}}_{j^*}) \bigr )\ne 0. \end{aligned}$$

Thus (21) is proved also in this final case. \(\square \)

6 Open Problems

With the general definition of the Fourier–Laplace transform of an \(f:\mathbb {R}^n \rightarrow \mathbb {R}\),

$$\begin{aligned} F_{f}(\textbf{z})=\int _{\mathbb {R}^n} f(\textbf{x})\,e^{\textbf{z}\cdot \textbf{x}} \,\textbf{dx}, \end{aligned}$$

Theorem 1.1 is a result on characteristic functions \(f=\chi _{\mathcal {P}}\) of generalized n-dimensional polytopes \(\mathcal {P}\) in \(\mathbb {R}^n\). This result can easily be generalized to linear combinations of such functions, i.e., to \(f=\sum _{j=1}^m \lambda _j\chi _{\mathcal {P}_j}\), because the Fourier–Laplace transform of such an f is again an E-function.

Problem 6.1

Generalize Theorem 1.1 to a larger class of functions \(f:\mathbb {R}^n \rightarrow \mathbb {R}\).

Problem 6.2

Let \(\mathcal {S}=\{\varvec{\sigma }(\textbf{t}):\textbf{t}\in D\}\) be an rp-hypersurface, \(\mathcal {P}_1\) and \(\mathcal {P}_2\) be generalized n-dimensional polytopes in \(\mathbb {R}^n\), and \(\gamma \) be a nonzero complex number. Is there a finite set \(T\subseteq D\) such that \(F_{\mathcal {P}_1}(\gamma \varvec{\sigma }(\textbf{t}))=F_{\mathcal {P}_2}(\gamma \varvec{\sigma }(\textbf{t}))\) for all \(\textbf{t}\in T\) already implies \(\mathcal {P}_1=\mathcal {P}_2\), at least if the number of faces of \(\mathcal {P}_1\) and \(\mathcal {P}_2\) is bounded by a constant?