1 Introduction

More than a hundred years ago, Stéphanos [14], Levi-Cività [7] and Stäckel [13] showed that the only differentiable functions on \({\mathbb {R}}\) satisfying the equation

$$\begin{aligned} f(x+y) = \sum _{k=1}^nu_k(x)v_k(y), \end{aligned}$$
(1.1)

with some functions \(u_k,v_k\), are exponential polynomials, that is, functions of the form

$$\begin{aligned} f(x) = \sum _k p_k(x)e^{\lambda _k x}, \end{aligned}$$

where the \(p_k\) are polynomials and \(\lambda _k \in {\mathbb {C}}\). This result was subsequently extended to continuous and measurable functions on \({\mathbb {R}}\), then to functions on \({\mathbb {R}}^n\) (see the book by Aczél [1]), to functions on Abelian groups in a paper of Székelyhidi [16] (see also [17] and [18] by the same author), and to (continuous) functions on arbitrary (topological) semigroups by the second author [10] (with exponential polynomials replaced by matrix functions of finite-dimensional (continuous) representations). There are many interesting and important equations related to (1.1), for example the extended d’Alembert equation

$$\begin{aligned} f(x+y) + f(x-y) = \sum _{k=1}^nu_k(x)v_k(y) \end{aligned}$$

considered by Rukhin [9, 19] on 2-divisible Abelian groups and by Penney and Rukhin [8] on some classes of noncommutative groups, or the equation

$$\begin{aligned} \sum _{i=1}^ng_i(y)f_i(xy) = \sum _{k=1}^nu_k(x)v_k(y), \end{aligned}$$
(1.2)

studied for functions on arbitrary groups by the second author [12]. The reader can find much information on this subject in the book of Stetkaer [15].
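As a quick illustration of the notion (a symbolic sketch of ours, not taken from the works cited above), one can check that the exponential polynomial \(f(x)=xe^{\lambda x}\) satisfies (1.1) with \(n=2\):

```python
# Sketch: f(x) = x*exp(l*x) solves the Levi-Civita equation (1.1) with n = 2.
# All names below are ours, chosen for illustration only.
import sympy as sp

x, y, l = sp.symbols('x y lambda')
f = lambda s: s * sp.exp(l * s)

# candidate decomposition f(x+y) = u1(x)v1(y) + u2(x)v2(y)
u1, v1 = x * sp.exp(l * x), sp.exp(l * y)
u2, v2 = sp.exp(l * x), y * sp.exp(l * y)

assert sp.expand(f(x + y) - (u1 * v1 + u2 * v2)) == 0
```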

In [2], Almira and the second author considered the following class of equations, restricting themselves to continuous functions on \({\mathbb {R}}^d\):

$$\begin{aligned} \sum _{j=1}^mf_j(a_jx+c_jy)= \sum _{k=1}^nu_k(x)v_k(y) \text { for all } x,y\in {\mathbb {R}}^d, \end{aligned}$$
(1.3)

where \(a_j, c_j\) are \(d\times d\)-matrices. This class includes various extensions of the (linearized) Skitovich–Darmois equation and other equations related to the study of probability distributions (see Feldman [4]). It was proved in [2] that if all the matrices \(a_j\), \(c_j\), for \(1\leqslant j\leqslant m\), and the matrices \(d_{i,j}:=a_ic_j-a_jc_i\), for \(i \ne j\), are invertible, then each function \(f_j\) in (1.3) is an exponential polynomial. The extension of this statement to commutative hypergroups was obtained by Fechner and Székelyhidi in [3].

Let us also note that a characterization of exponential polynomials was provided by Gajda in [5], who introduced a class of functional equations whose solutions, on an arbitrary Abelian group, are exactly the functions of the form \(f=\sum _{i}m_i p_i\), where \(m_i\) are exponential functions and \(p_i\) are polynomials.

Here we take one more step, adding polynomial coefficients to the summands in (1.3):

$$\begin{aligned} \sum _{j=1}^M P_j(x,y) f_j(a_jx + c_j y) = \sum _{k=1}^n u_k(x)v_k(y). \end{aligned}$$
(1.4)

It should be noted that a much more general equation for functions on \({\mathbb {R}}\) was considered by Światak [20]:

$$\begin{aligned} \sum _{j=1}^M \phi _j(x,y) f_j(x + c_j(y)) = b(x,y), \end{aligned}$$
(1.5)

where \(\phi _j\), \(c_j\) and b are continuous functions. In such a general setting the solutions can hardly be found in exact form; the results obtained in [20] concern their analytic properties. On the other hand, Laczkovich [6] studied the equation

$$\begin{aligned} \sum _{j=1}^M \phi _j(y) f_j(x + c_j(y)) = b(y), \end{aligned}$$
(1.6)

and proved that, under some mild restrictions on the functions \(\phi _j, c_j\) and b, the solutions \(f_j\) are exponential polynomials. Unexpectedly, Eq. (1.4), which at first glance allows much less freedom, has a wider class of solutions: combinations of exponentials with rational coefficients.

To state the result in exact form, let us denote by \(M_d({\mathbb {R}})\) the algebra of all \(d\times d\)-matrices with real entries, and by Inv the group of all invertible matrices in \(M_d({\mathbb {R}})\).

Theorem 1.1

Let \(a_1,\ldots ,a_M, c_1,\ldots ,c_M \in M_d({\mathbb {R}})\) satisfy the conditions

$$\begin{aligned} a_i, c_i\in Inv \text { and } a_i^{-1}c_i - a_j^{-1}c_j \in Inv, \text { for }i\ne j. \end{aligned}$$
(1.7)

If continuous functions \(f_{j}:{\mathbb {R}}^d \rightarrow {\mathbb {C}}\) satisfy the functional Eq. (1.4)

$$\begin{aligned} \sum _{j=1}^M f_j(a_jx + c_j y) P_j(x,y) = \sum _{k=1}^n u_k(x)v_k(y), \end{aligned}$$

with some polynomials \(P_{j}\) and some continuous functions \({u_k}, {v_k}\), then each \(f_j\) is a ratio of an exponential polynomial and a polynomial. In other words

$$\begin{aligned} f_j(x) = \sum _{m=1}^n e^{\langle {\lambda _m},x\rangle } r_m(x), \end{aligned}$$

where all \(r_m\) are rational functions.
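For a concrete one-dimensional instance of this phenomenon (our illustration, not an example from the text): the continuous function \(f(t)=e^t/(1+t^2)\) is not an exponential polynomial, yet it satisfies an equation of the form (1.4) with \(M=n=1\), \(a_1=c_1=1\) and \(P_1(x,y)=1+(x+y)^2\). A sympy check:

```python
# Sketch: a ratio of an exponential polynomial and a polynomial solving (1.4).
import sympy as sp

x, y = sp.symbols('x y')
f = lambda s: sp.exp(s) / (1 + s**2)

P = 1 + (x + y)**2                 # P_1(x, y)
lhs = P * f(x + y)                 # the single summand on the left of (1.4)
assert sp.expand(lhs - sp.exp(x) * sp.exp(y)) == 0   # = u_1(x) v_1(y)
```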

In fact, we prove a more general result which can be considered as a version of Theorem 1.1 for vector-valued functions. Let us say that scalar-valued functions \(\varphi _1, \ldots , \varphi _N\) on \({\mathbb {R}}^d\) are polynomially dependent up to an exponential polynomial if there exists a non-trivial (= non-zero) N-tuple of polynomials \(p_1, \ldots , p_N\) such that \(p_1\varphi _1 + \cdots + p_N\varphi _N\) is an exponential polynomial.

Theorem 1.2

Let continuous functions \(f_{ji}\) (\(1\leqslant j\leqslant n, 1\leqslant i\leqslant r_j\)) on \({\mathbb {R}}^d\) satisfy the equation

$$\begin{aligned}&\sum _{i=1}^{r_1} f_{1i}(a_1x+c_1y)P_{1i}(x,y) + \cdots + \sum _{i=1}^{r_n} f_{ni}(a_nx+c_ny)P_{ni}(x,y) \\&\quad = \sum _{j=1}^M u_j(x)v_j(y), \end{aligned}$$
(1.8)

where \(P_{ji}\) are polynomials on \({\mathbb {R}}^d\times {\mathbb {R}}^d\), and the matrices \(a_i\), \(c_i\) satisfy (1.7).

Then each family \(f_{k1}, \ldots , f_{kr_{k}}\) is polynomially dependent up to an exponential polynomial.

It is clear that Theorem 1.1 is a direct corollary of Theorem 1.2: a family consisting of one function f is polynomially dependent up to an exponential polynomial if and only if f is a ratio of an exponential polynomial and a polynomial. It should be added that, apart from its generality (in fact, because of it), Theorem 1.2 is better adapted to the proof than Theorem 1.1.

2 Notations and Preliminary Results

We consider functions defined on \({\mathbb {R}}^d\), so the symbols \(x,y,t,\ldots \) denote elements of \({\mathbb {R}}^d\).

By capital letters \(P,Q,F, \ldots \) we mostly denote functions on \({\mathbb {R}}^d\times {\mathbb {R}}^d\), while lowercase letters are used for functions on \({\mathbb {R}}^d\).

Let (EP) be the set of all exponential polynomials on \({\mathbb {R}}^d\). Let also \(\Lambda \) denote the space of all finite sums of products u(x)v(y), where \(u,v\in C({\mathbb {R}}^d)\).

For \(t\in {\mathbb {R}}^d\), we denote by \(R_t\) the shift operator on \(C({\mathbb {R}}^d)\):

$$\begin{aligned} R_t f(x) = f(x+t), \end{aligned}$$

and set

$$\begin{aligned} \Delta _t f = R_t f -f. \end{aligned}$$

Furthermore, with any pair \((a,t)\in M_d({\mathbb {R}})\times {\mathbb {R}}^d\), we relate an operator \(\delta _{a,t}\) on \(C({\mathbb {R}}^d\times {\mathbb {R}}^d)\) as follows:

$$\begin{aligned} \delta _{a,t}w(x,y) = w(x+at, y-t) - w(x,y). \end{aligned}$$

Then it is easy to check that, for an arbitrary function \(g\in C({\mathbb {R}}^d)\) and a polynomial q, one has

$$\begin{aligned} \delta _{a,t}(g(x+ay)q(y)) =g(x+ay)\Delta _{-t}q(y) \end{aligned}$$
(2.1)

and, more generally,

$$\begin{aligned} \delta _{a,t}(g(x+by)q(y)) =g(x+by)\cdot \Delta _{-t}q(y) + (\Delta _{(a-b)t}g)(x+by)\cdot R_{-t}q(y). \end{aligned}$$
(2.2)

Indeed

$$\begin{aligned} \delta _{a,t}(g(x+by)q(y))&= g(x+at+b(y-t))q(y-t)-g(x+by)q(y)\\&= g(x+at+b(y-t))q(y-t)-g(x+by)q(y-t) + g(x+by)(q(y-t)- q(y)) \\&= (g(x+by+(a-b)t)-g(x+by))q(y-t) + g(x+by)(q(y-t) - q(y)) \\&= (\Delta _{(a-b)t}g)(x+by)\cdot R_{-t}q(y) + g(x+by)\cdot \Delta _{-t}q(y). \end{aligned}$$

In particular,

$$\begin{aligned} \delta _{a,t}(g(x+by)) = (\Delta _{(a-b)t}g)(x+by). \end{aligned}$$
(2.3)
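The identities (2.1)–(2.3) are straightforward to verify symbolically. The following sympy sketch (ours; d = 1 and a concrete test polynomial q) checks all three:

```python
# Sketch (d = 1): verify (2.1)-(2.3) for delta_{a,t} w(x,y) = w(x+a*t, y-t) - w(x,y).
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')
g = sp.Function('g')
q = y**3 - 2*y                          # a concrete test polynomial

def delta(w, c):                        # the operator delta_{c,t}
    return w.subs({x: x + c*t, y: y - t}, simultaneous=True) - w

def Delta(h, expr, var):                # Delta_h f = R_h f - f in the variable var
    return expr.subs(var, var + h) - expr

# (2.2), the general product rule:
lhs = delta(g(x + b*y) * q, a)
rhs = g(x + b*y) * Delta(-t, q, y) + Delta((a - b)*t, g(x + b*y), x) * q.subs(y, y - t)
assert sp.expand(lhs - rhs) == 0

# (2.1) is the case b = a, and (2.3) is the case q = 1:
assert sp.expand(delta(g(x + a*y) * q, a) - g(x + a*y) * Delta(-t, q, y)) == 0
assert sp.expand(delta(g(x + b*y), a) - Delta((a - b)*t, g(x + b*y), x)) == 0
```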

We will need the following result which is a special case of [11, Lemma 2]:

Proposition 2.1

Let L be a finite-dimensional subspace of \(C({\mathbb {R}}^d)\) and let \(f \in C({\mathbb {R}}^d)\). Suppose that, for any \(y\in {\mathbb {R}}^d\), there is a finite-dimensional shift-invariant subspace \(L(y)\subset C({\mathbb {R}}^d)\) with

$$\begin{aligned} R_yf \in L+L(y). \end{aligned}$$
(2.4)

Then \(f \in (EP)\).

Corollary 2.1

If, for some \(n \in {\mathbb {N}}\), a continuous function \(f:{\mathbb {R}}^d \rightarrow {\mathbb {C}}\) satisfies the condition

$$\begin{aligned} \Delta _{t_n} \cdots \Delta _{t_1} f(x) \in (EP)\, \text { for all } t_1, \ldots , t_n \in {\mathbb {R}}^d, \end{aligned}$$
(2.5)

then \(f \in (EP)\).

Proof

Fixing \(t_1, \ldots , t_{n-1}\) and denoting \(\Delta _{t_{n-1}} \cdots \Delta _{t_1} f\) by g, we obtain the following relation for g:

$$\begin{aligned} g(x+t_n) - g(x) \in (EP), \end{aligned}$$
(2.6)

for all \(t_n \in {\mathbb {R}}^d.\)

Since each exponential polynomial belongs to a finite-dimensional shift-invariant subspace we obtain

$$\begin{aligned} R_{t}g \in {\mathbb {C}} g + L(t), \end{aligned}$$
(2.7)

for all \(t \in {\mathbb {R}}^d\), where L(t) is a finite-dimensional shift-invariant subspace. Applying Proposition 2.1 (with \(L = {\mathbb {C}}g\)) we conclude that g is an exponential polynomial. Thus

$$\begin{aligned} \Delta _{t_{n-1}} \cdots \Delta _{t_1} f(x) \in (EP), \text { for all } t_1, \ldots , t_{n-1} \in {\mathbb {R}}^d. \end{aligned}$$

Repeating these arguments \(n-1\) times we get that \(f\in (EP)\). \(\square \)

Corollary 2.2

Let \(f \in C({\mathbb {R}}^d)\) and let \(c \in Inv\). Let \(a_1, \ldots , a_n\in M_d({\mathbb {R}})\) satisfy the conditions \(a_i-c \in Inv\). If, for any \(t_1, \ldots , t_n\),

$$\begin{aligned} \delta _{a_n, t_n} \cdots \delta _{a_1, t_1} (f(x+cy)) \in \Lambda , \end{aligned}$$
(2.8)

then \(f \in (EP)\).

Proof

By (2.3), the previous condition can be written in the form

$$\begin{aligned} \Delta _{(a_n-c)t_n} \cdots \Delta _{(a_1-c)t_1}\ f(x+cy) \in \Lambda . \end{aligned}$$
(2.9)

Setting

$$\begin{aligned} g= \Delta _{(a_n-c)t_n} \cdots \Delta _{(a_1-c)t_1}\ f \end{aligned}$$

we obtain the equation

$$\begin{aligned} g(x+cy) = \sum _{j=1}^M u_j(x)v_j(y). \end{aligned}$$
(2.10)

After the change of variables \(y = c^{-1}z\) we see that (2.10) is the Levi-Cività functional equation for the function g. Therefore g is an exponential polynomial, and (2.9) turns into

$$\begin{aligned} \Delta _{(a_n-c)t_n} \cdots \Delta _{(a_1-c)t_1} f \in (EP) \qquad \text { for all } t_1,\ldots ,t_n\in {\mathbb {R}}^d. \end{aligned}$$
(2.11)

Since all \(a_i-c\) are invertible, one can find, for each n-tuple \(h_1,\ldots ,h_n \in {\mathbb {R}}^d\), an n-tuple \(t_1,\ldots ,t_n\) with \(h_i = (a_i-c)t_i\). Thus

$$\begin{aligned} \Delta _{h_n} \cdots \Delta _{h_1} f(x) \in (EP), \qquad \text {for all}\, \, h_1, \ldots , h_n \in {\mathbb {R}}^d. \end{aligned}$$

It follows from Corollary 2.1 that f is an exponential polynomial. \(\square \)

Lemma 2.1

For every polynomial q(x), there is an \(h\in {\mathbb {R}}^d\) such that

$$\begin{aligned} \deg (\Delta _h(q)) = \deg (q) - 1. \end{aligned}$$

Proof

Aiming at a contradiction, suppose that \(\deg (\Delta _h(q)) < n - 1\) for each h, where \(n= \deg (q)\). Let W be a differential operator of order \(n-1\) with constant coefficients; then \(\Delta _h(W(q)) = W(\Delta _h(q)) = 0\), the latter because W annihilates every polynomial of degree less than \(n-1\). Since h is arbitrary, \(W(q) = const.\) Since W is arbitrary, all partial derivatives of q of order n vanish, so \(\deg (q) \le n-1\), a contradiction. \(\square \)
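A small sympy experiment (ours) illustrates the lemma: for \(q=x_1^2x_2\), already the shift \(h=(1,1)\) lowers the degree by exactly one at each step, so three differences reduce q to a non-zero constant.

```python
# Sketch: iterated differences Delta_h drop the degree by one per step
# for a suitable shift h, as Lemma 2.1 asserts.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
q = x1**2 * x2                                    # total degree 3

def Delta(p, h):                                  # Delta_h p = p(. + h) - p
    return sp.expand(p.subs({x1: x1 + h[0], x2: x2 + h[1]}, simultaneous=True) - p)

h = (1, 1)
d1 = Delta(q, h)
d2 = Delta(d1, h)
d3 = Delta(d2, h)
print([sp.total_degree(p, x1, x2) for p in (q, d1, d2, d3)])   # [3, 2, 1, 0]
assert d3 != 0                                    # a non-zero constant
```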

3 The Proof of the Main Results

It will be convenient to deal first with a special case of Eq. (1.8):

$$\begin{aligned}&\sum _{i=1}^{r_1} f_{1i}(x+b_1y)P_{1i}(x,y) + \cdots + \sum _{i=1}^{r_n} f_{ni}(x+b_ny)P_{ni}(x,y) \\&\quad = \sum _{j=1}^M u_j(x)v_j(y). \end{aligned}$$
(3.1)

We will prove the following result.

Theorem 3.1

Let \(b_1,\ldots ,b_n \in Inv\) and \(b_{k}-b_j \in Inv\) when \(k\ne j\). Continuous functions \(f_{ki}\) on \({\mathbb {R}}^d\) satisfy the relation (3.1) with some non-trivial families \((P_{ki})_{i=1}^{r_k}\), \(k =1,\ldots ,n\), of polynomials on \({\mathbb {R}}^d\times {\mathbb {R}}^d\) and some continuous functions \({u_j}, {v_j}\) on \({\mathbb {R}}^d\) if and only if, for every k, the family \(f_{k1}, \ldots , f_{kr_{k}}\) is polynomially dependent up to an exponential polynomial.

Proof

The “if” part is easy: if, for example, \(\sum _{i=1}^{m}f_{1i}(x)p_i(x) = e(x)\), where \((p_i)_{i=1}^{m}\) is a non-trivial family of polynomials and \(e\in (EP)\), then

$$\begin{aligned} \sum _{i=1}^{m}f_{1i}(x+b_1y)p_i(x+b_1y) = e(x+b_1y). \end{aligned}$$

Since \(e(x+b_1y)\in \Lambda \) and the \(p_i(x+b_1y)\) are polynomials on \({\mathbb {R}}^d\times {\mathbb {R}}^d\), we obtain a relation of the required form; summing such identities over all k yields (3.1). So it remains to prove the “only if” implication.

For any \(a\in M_d({\mathbb {R}})\), we denote by \({{\mathcal {L}}}_a\) the polynomial hull of the set of all functions of the form \(f(x+ay)\), where \(f\in C({\mathbb {R}}^d)\), that is the space of all finite sums \(\sum _i f_i(x+ay) Q_i(x,y)\), where \(f_i\in C({\mathbb {R}}^d)\) and \(Q_i\) are polynomials on \({\mathbb {R}}^d\times {\mathbb {R}}^d\).

Let \({{\mathcal {M}}}(d)\) denote the set of all monomials \(\mu (x)= x_1^{m_1}\ldots x_d^{m_d}\) in d variables. Note that each polynomial Q(x, y) can be uniquely written in the form

$$\begin{aligned} Q(x,y) = \sum _{\mu \in {{\mathcal {M}}}(d)}\mu (x)p_{\mu }(y), \end{aligned}$$

where all \(p_{\mu }\) are polynomials. Making the change \(x = z-ay\), we write \(Q(x,y)=Q(z-ay,y)\) as a polynomial S(z, y); decomposing this polynomial as above:

$$\begin{aligned} S(z,y) = \sum _{\mu \in {{\mathcal {M}}}(d)}\mu (z)q_{\mu }(y), \end{aligned}$$

and returning to the initial variables we obtain

$$\begin{aligned} Q(x,y) = \sum _{\mu \in {{\mathcal {M}}}(d)} \mu (x+ay)q_{\mu }(y), \end{aligned}$$

where all \(q_{\mu }\) are polynomials.

Thus each function \(F=\sum _i f_i(x+ay) Q_i(x,y) \in {{\mathcal {L}}}_a\) can be written in the form \(\sum _k g_k(x+ay)q_k(y)\), where \(g_k\in C({\mathbb {R}}^d)\) and \(q_k\) are polynomials.
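This rewriting is easy to carry out mechanically; the following sympy sketch (our illustration, d = 1, with a sample polynomial Q) performs the passage through the change of variables \(x=z-ay\) described above:

```python
# Sketch (d = 1): rewrite Q(x,y) = x**2*y as sum_mu mu(x + a*y) * q_mu(y).
import sympy as sp

x, y, z, a = sp.symbols('x y z a')
Q = x**2 * y                                # a sample polynomial

S = sp.expand(Q.subs(x, z - a*y))           # S(z, y) = Q(z - a*y, y)
q_mu = sp.Poly(S, z).all_coeffs()[::-1]     # q_mu(y) for mu = 1, z, z**2, ...
rebuilt = sum(c * (x + a*y)**k for k, c in enumerate(q_mu))   # z = x + a*y
assert sp.expand(rebuilt - Q) == 0
print(q_mu)     # [a**2*y**3, -2*a*y**2, y]
```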

This implies two important facts:

(i) Each subspace \({{\mathcal {L}}}_a\) of \(C({\mathbb {R}}^d\times {\mathbb {R}}^d)\) is invariant under all operators \(\delta _{b,t}\).

The proof follows immediately from the equality (2.2) applied to functions \(g_k(x+ay)q_k(y)\).

(ii) The family of all operators \(\delta _{a,t}\), \(t\in {\mathbb {R}}^d\), is locally nilpotent. In other words, if \(F \in {{\mathcal {L}}}_a\) then there is \(N\in {\mathbb {N}}\) such that successively applying to F any N operators \(\delta _{a,t_i}\) we obtain 0.

This follows from the equality (2.1) applied to the functions \(g_k(x+ay)q_k(y)\), if one takes into account the inequality \(\deg (\Delta _tq)< \deg (q)\).

Let us denote the minimal value of N by \(N_a(F)\) and call it the order of F.
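For instance (a sketch of ours, d = 1): for \(F=g(x+ay)(y^2+1)\), formula (2.1) shows that each application of \(\delta _{a,t}\) lowers the degree of the polynomial factor, so \(N_a(F)\leqslant 3\):

```python
# Sketch (d = 1): local nilpotency of the operators delta_{a,t} on L_a.
import sympy as sp

x, y, a, t1, t2, t3 = sp.symbols('x y a t1 t2 t3')
g = sp.Function('g')
F = g(x + a*y) * (y**2 + 1)

def delta(w, t):        # delta_{a,t} w(x,y) = w(x + a*t, y - t) - w(x,y)
    return w.subs({x: x + a*t, y: y - t}, simultaneous=True) - w

out = delta(delta(delta(F, t1), t2), t3)
assert sp.expand(out) == 0      # three operators annihilate F, so N_a(F) <= 3
```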

As we already mentioned, all operators \(\delta _{b,t}\) preserve each subspace \({{\mathcal {L}}}_a\). It should be added that they do not increase the orders of functions in \({{\mathcal {L}}}_a\):

$$\begin{aligned} N_a(\delta _{b,t}(F))\leqslant N_a(F), \text { for each }F\in {\mathcal L}_a. \end{aligned}$$
(3.2)

The reason is that all operators \(\delta _{b,t}\) commute with \(\delta _{a,s}\), whence

$$\begin{aligned} \delta _{a,t_1}\delta _{a,t_2}\ldots \delta _{a,t_N}(\delta _{b,t}(F)) = \delta _{b,t}\delta _{a,t_1}\delta _{a,t_2}\ldots \delta _{a,t_N}(F) = 0, \end{aligned}$$

if \(N = N_a(F)\).

Note also that the space \(\Lambda \) is invariant under all operators \(\delta _{b,t}\).

Now we will prove the statement of Theorem 3.1 for the first collection of functions; that is, we show that some non-trivial combination of the functions \(f_{1i}\), \(1\leqslant i\leqslant r_1\), with polynomial coefficients belongs to (EP). To this end, let us write (3.1) in the form

$$\begin{aligned} \sum _{i=1}^{r_1} f_{1i}(x+b_1y)P_{1i}(x,y) = F_2+\ldots +F_n + \Pi , \end{aligned}$$
(3.3)

where each \(F_j\) is some function in \({{\mathcal {L}}}_{b_j}\) and \(\Pi \) is a function in \(\Lambda \) (the notations are generic: we will not change them when \(F_j\) and \(\Pi \) change).

As above, \(P_{1i}(x,y) = \sum _{\mu \in {{\mathcal {M}}}(d)} \mu (x+b_1y)q_{i\mu }(y)\), so (3.3) can be written as follows:

$$\begin{aligned} \sum _{i=1}^{r_1}\sum _{\mu \in {{\mathcal {M}}}(d)} \mu (x+b_1y)q_{i\mu }(y)f_{1i}(x+b_1y) = F_2+\ldots +F_n + \Pi . \end{aligned}$$

Let \(q_{i_0\mu _0}(y)\) be one of those polynomials \(q_{i\mu }\) that have the maximal degree: \(\deg (q_{i_0\mu _0}) = D \geqslant \deg (q_{i\mu })\), for all \(i,\mu \). Using Lemma 2.1, choose \(h_1,\ldots ,h_D\) in such a way that each successive application of \(\Delta _{h_1}\), \(\Delta _{h_2},\ldots , \Delta _{h_D}\) to \(q_{i_0\mu _0}\) reduces the degree of the polynomial by 1. Thus

$$\begin{aligned} \Delta _{h_D}\ldots \Delta _{h_1}q_{i_0\mu _0} = C_1 \ne 0. \end{aligned}$$

Setting \(\Delta = \Delta _{h_D}\ldots \Delta _{h_1}\), we see that \(\Delta q_{i\mu } = C_{i\mu }\), where \(C_{i\mu } \in {\mathbb {C}}\) (they can be zero), for all \(i,\mu \).

Now let \(\delta = \delta _{b_1,h_D}\ldots \delta _{b_1,h_1}\); applying this operator to both sides of (3.3) and using (2.1), we obtain

$$\begin{aligned} \sum _{i=1}^{r_1}\sum _{\mu \in {{\mathcal {M}}}(d)} \mu (x+b_1y)C_{i\mu }f_{1i}(x+b_1y) = F_2+\ldots +F_n + \Pi . \end{aligned}$$

The function

$$\begin{aligned} \psi (x) = \sum _{i=1}^{r_1} \sum _{\mu \in {\mathcal M}(d)}C_{i\mu }f_{1i}(x)\mu (x) \end{aligned}$$

is a non-trivial polynomial combination of the functions \(f_{1i}(x)\). Indeed,

$$\begin{aligned} \psi (x) = \sum _i f_{1i}(x)k_i(x) \end{aligned}$$

where \(k_i(x) = \sum _{\mu }C_{i\mu }\mu (x)\) are polynomials. Since \(C_{i_0\mu _0}\ne 0\), the polynomial \(k_{i_0}\) is non-zero.

Now starting with the equality

$$\begin{aligned} \psi (x+b_1y) = F_2+\ldots +F_n + \Pi , \end{aligned}$$

we will show that \(\psi \in (EP)\).

Let \(K_j = N_{b_j}(F_j)\), for \(j=2,3,\ldots ,n\). By definition,

$$\begin{aligned} \delta _{b_2,t_1}\delta _{b_2,t_2}\ldots \delta _{b_2,t_{K_2}}F_2 = 0, \end{aligned}$$

for all \(t_i\in {\mathbb {R}}^d\).

Since all the subspaces \({\mathcal {L}}_{b_j}\) are invariant under the operators \(\delta _{b_2,t}\), the operator \(\delta _{b_2,t_1}\delta _{b_2,t_2}\ldots \delta _{b_2,t_{K_2}}\) transforms each function \(F_j\), \(j\ne 2\), into another function (again denoted \(F_j\) by our agreement) in \({{\mathcal {L}}}_{b_j}\), without increasing its order (see (3.2)). Thus we now have:

$$\begin{aligned} \delta _{b_2,t_1}\delta _{b_2,t_2}\ldots \delta _{b_2,t_{K_2}}\psi (x+b_1y) = F_3+\ldots +F_n + \Pi . \end{aligned}$$

Repeating the same trick, we reduce the number of the \(F_j\) until only the summand \(\Pi \) remains on the right-hand side. In other words, choosing \(a_i\in M_d({\mathbb {R}})\) as follows:

$$\begin{aligned} a_1 = a_2 =\ldots = a_{K_2} = b_2, a_{K_2+1} = \ldots = a_{K_2+K_3} = b_3, \text { and so on}, \end{aligned}$$

we obtain:

$$\begin{aligned} \delta _{a_1,t_1}\delta _{a_2,t_2}\ldots \delta _{a_K,t_K} \psi (x+b_1y) \in \Lambda , \end{aligned}$$

for all \(t_i\in {\mathbb {R}}^d\), \(1\leqslant i\leqslant K\), where \(K = \sum _{j=2}^n K_j\). Since \(a_i - b_1 \in Inv\) for all i, it follows from Corollary 2.2 that \(\psi \in (EP)\). The same argument applied to each of the other collections \(f_{k1},\ldots ,f_{kr_k}\) completes the proof. \(\square \)

To deduce Theorem 1.2 from Theorem 3.1, it suffices to denote \(a_j^{-1}c_j\) by \(b_j\) and set \(g_{ji}(x)=f_{ji}(a_jx)\). Clearly, if the functions \((f_{ji})\) satisfy (1.8) then the functions \((g_{ji})\) satisfy (3.1), since \(g_{ji}(x+b_jy) = f_{ji}(a_jx+c_jy)\); moreover, condition (1.7) guarantees that \(b_j\in Inv\) and \(b_k-b_j\in Inv\) for \(k\ne j\). By Theorem 3.1, all the families \(\{g_{ji}:1\leqslant i\leqslant r_j\}\), \(j=1,\ldots ,n\), are polynomially dependent up to an exponential polynomial, and the same is true for the families \(\{f_{ji}:1\leqslant i\leqslant r_j\}\), since composition with the invertible linear map \(x\mapsto a_j^{-1}x\) takes polynomials to polynomials and exponential polynomials to exponential polynomials.