1 Introduction

Polynomial inequalities play an exceptional role in approximation and interpolation theory as well as in numerical analysis. They provide primary tools for estimating approximation errors, e.g. for numerical solutions of differential equations. Some of them are closely related to central approximation theorems. The best-known example is the Markov inequality, which says that for every polynomial p of N variables

$$\begin{aligned} \Vert |\textrm{grad} \, p | \Vert _E\le M \, (\deg p)^m\Vert p\Vert _E \end{aligned}$$
(1)

where E is a compact subset of \(\mathbb {K}^N\) (\(\mathbb {K}=\mathbb {R}\) or \(\mathbb {C}\)), \(\Vert \cdot \Vert _E\) is the maximum norm on E, \(\textrm{grad} \, p\) is the gradient of p and \(M,m>0\) are constants independent of p and of the degree of p. Sets satisfying the above estimate are usually called Markov sets and property (1) will be called the classical Markov inequality in the paper. It is worth noting that this inequality is closely related to the Bernstein approximation theorem, Pleśniak’s property, Schur-type estimates and the extension of smooth functions (see e.g. [20]). Moreover, the above inequality provides a tool for constructing norming sets, also called admissible meshes, on E, i.e. finite sets \(\mathscr {A}_n\subset E\) such that the estimate

$$\begin{aligned} \Vert p\Vert _E \le C_n \Vert p\Vert _{\mathscr {A}_n} \end{aligned}$$
(2)

holds for all polynomials p of degree at most n, where \(C_n\) depends only on E and n (see [9]). Norming sets are particularly useful in numerical computations.
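The sharpness of the classical Markov inequality on an interval can be checked computationally. The following sketch is our illustration, not part of the paper; it uses sympy and the classical fact that on \(E=[-1,1]\) inequality (1) holds with \(M=1\), \(m=2\) (Markov's theorem), with equality attained by Chebyshev polynomials at the endpoints.

```python
import sympy as sp

# Illustration (not from the paper): on E = [-1, 1] the classical Markov
# inequality (1) holds with M = 1, m = 2, and the Chebyshev polynomial
# T_n attains equality: ||T_n'||_E = n^2 ||T_n||_E.
x = sp.symbols('x')
for n in range(1, 8):
    Tn = sp.chebyshevt(n, x)
    dTn = sp.diff(Tn, x)
    # ||T_n||_E = 1; estimate ||T_n'||_E by sampling a grid in [-1, 1]
    grid = [sp.Rational(i, 100) for i in range(-100, 101)]
    sup_d = max(abs(dTn.subs(x, t)) for t in grid)
    assert sup_d <= n**2            # Markov bound with M = 1, m = 2
    assert dTn.subs(x, 1) == n**2   # equality at the endpoint x = 1
```

Restricting to a grid can only underestimate the sup-norm, so the first assertion is consistent with (though weaker than) Markov's theorem.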

One can see that the classical Markov inequality cannot hold if E is a compact subset of a proper algebraic variety: if p is a nonzero polynomial of minimal degree vanishing on the variety, then \(\Vert p\Vert _E=0\), while some partial derivative of p, being nonzero and of smaller degree, does not vanish identically on the variety. However, also on algebraic sets we expect the consequences of (1) mentioned above. The following basic questions naturally arise.

Question 1. Let V be an arbitrary algebraic set in \(\mathbb {C}^N\). How can one extend the classical Markov inequality (1) in such a way that

  1. (i)

    it is satisfied on some compact subsets of V and

  2. (ii)

    it gives the possibility of constructing admissible meshes on V?

So far, the known admissible meshes on algebraic varieties involve only some subsets of the sphere, the torus and certain curves, see [18, 21]. An estimate similar to (1), studied e.g. in [8] and called a tangential Markov inequality, is considered only for real algebraic varieties and does not seem to satisfy condition (ii).

In the present paper, we extend the classical Markov inequality (1) to the case of compact subsets of an algebraic variety V (see Def. 3.5). For this purpose, we use a reduced Gröbner basis of the ideal of V to construct a polynomial space W\(\subset \mathbb {C}[z_1,...,z_N]\). For a fixed algebraic variety \(V\subset \mathbb {C}^N\), this space can easily be computed using Singular (a computer algebra system for polynomial computations). Our definition of the Markov inequality on V is strictly related to the space W and coincides with the classical Markov inequality in the case \(V=\mathbb {C}^N\). Moreover, we present methods that allow one to construct admissible meshes on all algebraic hypersurfaces and on certain algebraic varieties of codimension greater than 1, see [5]. In this way we answer Question 1, with respect to both conditions (i) and (ii), for the algebraic varieties mentioned above.

The next problem of interest is to indicate compact subsets of algebraic varieties satisfying our Markov inequality.

Question 2. Let V be an arbitrary algebraic set in \(\mathbb {C}^N\). Is it possible to characterise compact subsets of V satisfying the Markov inequality given in Definition 3.5?

We present such a characterization for algebraic hypersurfaces \(V\subset \mathbb {C}^N\). Namely, we prove that the Markov inequality on a compact set \(E\subset V\) is equivalent to the classical Markov inequality (1) on some projection of E into \(\mathbb {C}^{N-1}\), see Theorem 4.1. All assumptions in this theorem are necessary, as we show in Remark 4.2 and Examples 4.7, 4.8. Moreover, we prove a similar result for some algebraic varieties of codimension greater than one (Corollary 5.4). We also show a division inequality, sometimes called a Schur-type inequality (Def. 3.1), on algebraic sets, see Theorems 4.5 and 5.3. These results seem to be of independent interest.

The paper is organized as follows. In the second section, just after the introduction, we present basic notation and results related to the construction of a polynomial space W that will be used in the definition of the Markov inequality on algebraic sets. In the subsequent section, we give definitions of division and Markov inequalities on algebraic sets and we prove their fundamental properties. The fourth section deals with results concerning algebraic hypersurfaces and contains a characterisation of sets satisfying the Markov inequality as well as two examples. In the last section we study Markov and division inequalities on algebraic sets of codimension greater than one. We present three results and two examples related to this case.

2 Preliminaries

Let \(\mathbb {K}\) be the field \(\mathbb {R}\) or \(\mathbb {C}\) and \(\mathbb {N}=\{1,2,3,\dots \}, \ \mathbb {N}_0=\{0\}\cup \mathbb {N}\). The space of polynomials of N variables with coefficients in \(\mathbb {K}\) will be denoted by \(\mathscr {P}(\mathbb {K}^N)\). The space \(\mathscr {P}_n(\mathbb {K}^N)\) consists of polynomials of degree at most n. However, it is sometimes more convenient to write \(\mathscr {P}(z)\), \(z\in \mathbb {C}^N\) for \(\mathscr {P}(\mathbb {C}^N)\) and analogously \(\mathscr {P}_n(z)\) for \(\mathscr {P}_n(\mathbb {C}^N)\). For an algebraic set \(V\subset \mathbb {C}^N\) we define \(\mathscr {P}(V):=\mathscr {P}(\mathbb {C}^{N})/I(V)\) where I(V) is the ideal in the ring \(\mathscr {P}(\mathbb {C}^{N})\) related to V defined by \( I(V):=\{ p\in \mathscr {P}(\mathbb {C}^{N}) \,: \, p_{|V}=0\}.\) As usual, \(|\alpha |=\alpha _1+\dots +\alpha _N\) and \(D^\alpha =\frac{\partial ^{|\alpha |}}{\partial x_1^{\alpha _1}\dots \partial x_N^{\alpha _N}}\) for \(\alpha =(\alpha _1,\dots ,\alpha _N)\in \mathbb {N}_0^N\).

Consider a general algebraic hypersurface \(V(s)\subset \mathbb {C}^{N+1}\) given by one polynomial equation \(s(z,y)=0\), \(z\in \mathbb {C}^{N}\), \(y\in \mathbb {C}\), such that deg\(_ys\ge 1\). Applying an invertible linear change of variables if necessary, we can assume that

$$\begin{aligned} s(z,y) = y^k-\sum \limits _{j=0}^{k-1} \widetilde{s_j}(z)\,y^j\ \text{ with }\ k\ge 1. \end{aligned}$$
(3)

In this particular case, we take the following space of polynomials as a family of representatives of (classes from) \(\mathscr {P}(V(s))\)

$$\begin{aligned} \mathscr {P}(z)\otimes \mathscr {P}_{k-1}(y):=\Big \{p\in \mathscr {P}(z,y)\,:\, p(z,y)= \sum _{j=0}^{k-1} p_j(z)y^j, \ \ p_j\in \mathscr {P}(z) \text { for } j=0,\ldots ,k-1\Big \}. \end{aligned}$$

Observe that the mapping

$$\begin{aligned} \mathscr {P}(z)\otimes \mathscr {P}_{k-1}(y) \ni p \ \mapsto \ p_{|_{V(s)}} \in \mathscr {P}(V(s)) \end{aligned}$$

is an isomorphism.
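For a concrete hypersurface, the representative in \(\mathscr {P}(z)\otimes \mathscr {P}_{k-1}(y)\) of a given polynomial can be obtained by division with remainder in y. A minimal sympy sketch, assuming the example \(s(z,y)=y^2-z\) (so \(k=2\) in (3)):

```python
import sympy as sp

# Sketch (assumed example, not from the paper): for s(z, y) = y^2 - z,
# i.e. k = 2 and s~_0(z) = z in (3), the representative of a class in
# P(V(s)) inside P(z) (x) P_1(y) is the remainder of division by s in y.
z, y = sp.symbols('z y')
s = y**2 - z
p = y**3 + z*y**2 + 1            # an arbitrary polynomial
q, r = sp.div(p, s, y)           # p = q*s + r with deg_y r <= k - 1
assert sp.expand(q*s + r - p) == 0
assert sp.degree(r, y) <= 1      # r lies in P(z) (x) P_1(y)
# On V(s) the class of p equals the class of r; check at the point
# (z, y) = (4, 2), which lies on V(s) since 2^2 = 4:
assert p.subs({z: 4, y: 2}) == r.subs({z: 4, y: 2})
```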

The case of an arbitrary algebraic variety \(V\subset \mathbb {C}^{N+1}\) given by several equations is more complicated. Take n polynomials \(s_1,...,s_n\in \mathscr {P}(\mathbb {C}^{N+1})\) such that

$$\begin{aligned} V=V(s_1,...,s_n):= \{ z=(z_1,...,z_{N+1})\in \mathbb {C}^{N+1}\, : \, s_1(z)=0, ..., \, s_n(z)=0\}. \end{aligned}$$
(4)

We assume that \(V(s_1,...,s_n)\) is not a finite set of points.

Consider the ideal \(I=I(V)\) in the ring of polynomials \(\mathscr {P}(\mathbb {C}^{N+1})\) related to \(V=V(s_1,\ldots ,s_n)\). Fix an ordering \(\preceq \) in the family of monomials \(\mathbb {T}^{N+1}:=\left\{ z^\alpha \,: \, \alpha \in \mathbb {N}_0^{N+1} \right\} \). By Proposition 6 in [11, Ch.2, §7], we find the unique reduced Gröbner basis \(G=\{g_1,...,g_\ell \}\) of I(V) related to \(\preceq \). Taking into account Proposition 1 in [11, Ch.2, §6], for any polynomial \(p\in \mathscr {P}(\mathbb {C}^{N+1})\) we can construct the unique \(r_p\in \mathscr {P}(\mathbb {C}^{N+1})\) and \(q_p\in I(V)\) with the following two properties:

(i) \(p=q_p+r_p\),

(ii) no term of \(r_p\) is divisible by any of \(LT(g_1),...,LT(g_\ell )\)

where LT(g) is the leading term of the polynomial g. We define \(\textbf{W}_v\) as a linear complement of I(V) in the space \(\mathscr {P}(\mathbb {C}^{N+1})\) in the sense of condition (i). In other words, \(\textbf{W}_v=\{r_p\,:\, p\in \mathscr {P}(\mathbb {C}^{N+1})\}\). Obviously, \(\textbf{W}_v\) is a vector space and \(\textbf{W}_v\cap I(V)=\{0\}\). The set \(\textbf{W}_v\) constructed above is a space of polynomial representatives of (classes from) \(\mathscr {P}(V)\) (related to the ordering \(\preceq \)). The linear mapping

$$\begin{aligned} \Phi \,: \, \textbf{W}_v \ni r \ \mapsto \ r_{|_V} \in \mathscr {P}(V) \end{aligned}$$

is an isomorphism and the inverse map can be stated as follows

$$\begin{aligned} \Phi ^{-1}\, : \, \mathscr {P}(V) \ni p \ \mapsto \ r_p \in \textbf{W}_v \end{aligned}$$
(5)

where \(r_p\) is defined in (i). Moreover, by the definition of Gröbner basis (see Def.5 in [11, Ch.2, §5]), we have

$$\langle LT(g_1),...,LT(g_\ell )\rangle =\langle LT (I)\rangle $$

where \(LT (I):=\{LT(p)\,: \, p\in I\}\).

Remark 2.1

The space \(\textbf{W}_v\) is a set of linear combinations with coefficients in \(\mathbb {C}\), of monomials, none of which is divisible by any of \(LT(g_1),...,LT(g_\ell )\), i.e.

$$\begin{aligned} \textbf{W}_v=\left\{ \sum \,\!\! ' c_\alpha z^\alpha \,: \, c_\alpha \in \mathbb {C}, \ z^\alpha \in \mathbb {T}^{N+1}\setminus LT(G)\cdot \mathbb {T}^{N+1} \right\} \end{aligned}$$

where \(\sum '\) denotes a sum of any finite number of elements and \(LT(G):=\{LT(g_i)\,: \, i=1,...,\ell \}\).

Observe that in the case of an algebraic set given by one polynomial equation \(s(z,y)=0\) where s is of form (3), we have \(\textbf{W}_{v}=\mathscr {P}(z)\otimes \mathscr {P}_{k-1}(y)\) for the reverse lexicographical ordering of monomials \(\mathbb {T}^{N+1}\).

Example 2.2

(cf. [1]) Consider

$$\begin{aligned} s_1(x,y,z)=y^2+z^2-x^2-1, \ \ \ \ s_2(x,y,z)=z^2+yz-2y^2+xz-xy+1 \end{aligned}$$

and \(V=V(s_1,s_2)\). Using Singular (a computer algebra system for polynomial computations), we can find the reduced Gröbner basis of the ideal I(V) for the graded lexicographical ordering of monomials \(\mathbb {T}^3\):

$$\begin{aligned} G=\{s_1, \ s_2, \ s_3\} \ \ \ \ \text{ where } \ \ \ s_3(x,y,z)= y^3+\tfrac{1}{3}y^2z-\tfrac{4}{3}yz^2+\tfrac{1}{3}x-\tfrac{1}{3}y-\tfrac{2}{3}z. \end{aligned}$$

Observe that the ideal generated by \(s_1,s_2\) is equal to the ideal I(V), i.e. \(\langle s_1,s_2\rangle = I(V)\). Indeed,

$$\begin{aligned} s_3(x,y,z) = \tfrac{1}{3}(z-y)\,s_1(x,y,z)+\tfrac{1}{3}(x-2y-z)\,s_2(x,y,z). \end{aligned}$$

However, \(\{s_1,s_2\}\) is not a Gröbner basis because \(LT(s_3)=y^3\not \in \langle LT(s_1), LT(s_2)\rangle = \langle x^2, xy\rangle \). We have \(LT(G)=\{x^2, xy, y^3\}\) and by Remark 2.1,

$$\begin{aligned} \textbf{W}_v&= \{ a(z)+b(z)x+c(z)y+d(z)y^2\,: \, a,b,c,d\in \mathscr {P}(\mathbb {C})\} \\&= \mathscr {P}(z)\otimes \mathscr {P}_1(x) + \mathscr {P}(z)\otimes \mathscr {P}_2(y). \end{aligned}$$

On the other hand, for the lexicographical ordering we obtain the reduced Gröbner basis

$$\begin{aligned} G=\{s_3,s_4\} \ \ \ \ \text{ with } \ \ \ s_4(x,y,z)=y^4-\tfrac{2}{3}y^3z-\tfrac{5}{3}y^2z^2-y^2+\tfrac{4}{3}yz^3+z^2+\tfrac{1}{3}. \end{aligned}$$

Moreover, \(LT(G)=\{x,y^4\}\) and by Remark 2.1, we get

$$\begin{aligned} \textbf{W}_v = \mathscr {P}(z)\otimes \mathscr {P}_3(y). \end{aligned}$$

If we consider the reverse lexicographical ordering then the reduced Gröbner basis contains 4 elements and

$$\begin{aligned} \textbf{W}_v = \mathscr {P}(x)\otimes \mathscr {P}_3(y)+ \mathscr {P}_1(x)\otimes \mathscr {P}_1(z). \end{aligned}$$

Remark 2.3

The set

$$\begin{aligned} \left\{ z^\alpha +I\,: \, z^\alpha \in \mathbb {T}^{N+1}\setminus LT(G)\cdot \mathbb {T}^{N+1} \right\} \end{aligned}$$

is a basis of the space \(\mathscr {P}(V)\).

The dimension of the space \(\textbf{W}_v\) constructed for the algebraic set \(V=V(s_1,...,s_n)\) is finite if and only if the dimension of V is zero. Therefore, we will consider only algebraic sets of positive dimension.

As an immediate consequence of Remark 2.1, we obtain

Remark 2.4

The space \(\textbf{W}_v\) is invariant under derivation. Moreover, \(\textbf{W}_v\) is invariant under homothety, i.e. \(p\in \textbf{W}_v\) implies \(p\circ \tau \in \textbf{W}_v\) where \(\tau (z):=\lambda z+a\) with \(\lambda \in \mathbb {C}\) and \(a\in \mathbb {C}^{N+1}\). This property allows one to prove the invariance of the Markov inequality under homothety, see Section 3.

It is worth noting that the Gröbner basis technique was also used by Cox and Ma’u in [12] to study Chebyshev constants and the transfinite diameter on algebraic varieties.

The proposition below shows that our concept presented in the next sections makes sense.

Proposition 2.5

For a polynomial s of form (3) the projection

$$\begin{aligned} \pi : V \ni (z,y) \ \mapsto \ z\in \mathbb {C}^{N} \end{aligned}$$
(6)

is a proper mapping. Consequently, the map \(V(s_1,\ldots , s_n) \ni (z,y) \ \mapsto \ z\in \mathbb {C}^{N}\) is proper provided that at least one of \(s_1,\ldots ,s_n\) is of form (3).

Proof

It is easy to check that for \(z\in B(0,R):=\{w\in \mathbb {C}^N\,:\, \Vert w\Vert \le R\}\) we have

$$\begin{aligned} |y| \le \max \Big \{R,\sum _{j=0}^{k-1}\Vert \widetilde{s_j}\Vert _{B(0,R)}R^{-(k-1-j)}\Big \}, \end{aligned}$$

and hence (z, y) lies in a compact set.

Observe that the above statement is not true in general, e.g., for \(s(z,y)=yz\), which is not of form (3): the fiber \(\pi ^{-1}(0)\) is the whole line \(\{0\}\times \mathbb {C}\). \(\square \)
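The bound in the proof of Proposition 2.5 can be tested numerically. A sketch assuming the concrete polynomial \(s(z,y)=y^2-zy-1\), i.e. \(k=2\), \(\widetilde{s_1}(z)=z\), \(\widetilde{s_0}(z)=1\) in (3), and \(R=2\):

```python
import numpy as np

# Numerical illustration (assumed example): for s(z, y) = y^2 - z*y - 1
# the proof's bound gives, for |z| <= R,
#   |y| <= max{R, ||s~_1||_B * R^0 + ||s~_0||_B * R^{-1}}.
R = 2.0
bound = max(R, R * 1.0 + 1.0 / R)   # ||s~_1||_{B(0,R)} = R, ||s~_0|| = 1
for t in np.linspace(0.0, 2*np.pi, 200):
    z0 = R * np.exp(1j*t)                  # points on the circle |z| = R
    roots = np.roots([1.0, -z0, -1.0])     # roots of y^2 - z0*y - 1
    assert max(abs(roots)) <= bound + 1e-9
```

Here the worst case on the real axis is \(|y|=(2+\sqrt{8})/2\approx 2.414\le 2.5\), so the fibers over the disc indeed stay in a compact set.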

3 Division and Markov Inequalities

We start with an inequality that can be easily defined on compact polynomially determining sets (e.g. [14]) as well as on compact subsets of algebraic varieties.

Definition 3.1

Let K be a compact subset of \(\mathbb {C}^N\). We say that K satisfies the division inequality with exponent m if for any polynomial \(q\not \equiv 0\) on K there exists a positive constant M such that for all polynomials \(p\in \mathscr {P}(\mathbb {C}^N)\)

$$\begin{aligned} \Vert p\Vert _K \le M(\deg p + \deg q)^{m\, \deg q}\Vert pq\Vert _K \end{aligned}$$
(7)

where \(\Vert \cdot \Vert _K\) is the sup-norm on the set K. This property is sometimes called a Schur-type inequality.

For a polynomial s given in form (3) consider the set

$$\begin{aligned} \mathscr {A}=\mathscr {A}(s):=\{z\in \mathbb {C}^N\ :\ \textrm{Disc}_y\left( s\right) =0\}=\left\{ z\in \mathbb {C}^N\ :\ \textrm{Res}_y\left( s,\tfrac{\partial s}{\partial y}\right) =0\right\} \end{aligned}$$
(8)

where \(\textrm{Disc}_y\left( s\right) \) is the discriminant of s in y and \(\textrm{Res}_y\left( s,\frac{\partial s}{\partial y}\right) \) is the resultant of \(s,\frac{\partial s}{\partial y}\) in y.
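Both descriptions of \(\mathscr {A}(s)\) in (8) are directly computable. A sympy sketch for the assumed example \(s(z,y)=y^2-z\):

```python
import sympy as sp

# Sketch (assumed example): for s(z, y) = y^2 - z the set A(s) of (8)
# is the zero set of the discriminant of s in y.
z, y = sp.symbols('z y')
s = y**2 - z
disc = sp.discriminant(s, y)
assert disc == 4*z                 # so A(s) = {z = 0}
# Consistency with the resultant form in (8): Res_y(s, ds/dy) vanishes
# exactly where the discriminant does.
res = sp.resultant(s, sp.diff(s, y), y)
assert sp.solve(res, z) == [0]
# Off A(s) the roots are distinct, on A(s) they collide (cf. Remark 3.2):
assert len(set(sp.solve(s.subs(z, 1), y))) == 2
assert len(set(sp.solve(s.subs(z, 0), y))) == 1
```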

Remark 3.2

For s given in form (3) and a fixed \(z\in \mathbb {C}^{N}\), the polynomial \(y\mapsto s(z,y)\) of one complex variable y has k pairwise distinct roots if and only if \(z\in \mathbb {C}^{N}\setminus \mathscr {A}\).

We will show (see Proposition 3.4) that irreducibility of s implies \(\mathbb {C}^{N}\setminus \mathscr {A}\ne \emptyset \).

For a compact set \(K\subset \mathbb {C}^{N}\), a polynomial vector Q\(\ =[q_0,...,q_{k-1}]^T \in \left( \mathscr {P}(\mathbb {C}^{N})\right) ^{k}\) and a polynomial matrix B\(\ \in \left( \mathscr {P}(\mathbb {C}^{N})\right) ^{k \times k}\), we will use the norms on K defined as follows

$$\begin{aligned} \Vert \textbf{Q}\Vert _K:=\max _{j=0,...,k-1}\{\Vert q_j\Vert _K\}, \ \ \ \ \ \ \Vert \textbf{B}\Vert _K:=\sum _{j=0}^{k-1} \Vert \textrm{Col}_j(\textbf{B})\Vert _K \end{aligned}$$

where \(\textrm{Col}_j(\textbf{B})\) denotes the j-th column of B. As usual, we write I for the identity matrix. It is easy to show that for two polynomial matrices B\(_1\) and B\(_2\) we have \(\Vert \textbf{B}_1 \, \textbf{B}_2 \Vert _K\le \Vert \textbf{B}_1\Vert _K \Vert \textbf{B}_2\Vert _K\).

Proposition 3.3

Let \(V(s)\subset \mathbb {C}^{N+1}\) be an algebraic variety defined by a polynomial s in the form \(s(z,y) = y^k-\sum \nolimits _{j=0}^{k-1}s_j(z)\,y^j\) with \(k\ge 1\), \(K\subset \mathbb {C}^N\) be a compact set and \(E:=\pi ^{-1}(K)\subset V(s)\) where \(\pi \) denotes the projection (6). If K satisfies the division inequality with exponent m and \(K\setminus \mathscr {A}(s)\ne \emptyset \) then

$$\begin{aligned} \Vert [p_0,...,p_{k-1}]\Vert _K \le \ M_0\, (\deg p)^{m_0} \Vert p\Vert _{E} \end{aligned}$$
(9)

for any polynomial p written in the form \(p(z,y)=\sum _{j=0}^{k-1} p_j(z)\, y^j\) on V(s) where \(M_0,\, m_0\ge 0\) are constants independent of \(p_0,\ldots ,p_{k-1}\) and \(m_0=m\,(k-1)\, \deg s\).

Proof

Fix a point \(z\in \mathbb {C}^N\setminus \mathscr {A}\). By Remark 3.2, there exist k pairwise distinct roots of the polynomial \(s(z,\cdot )\), say \(y_1=y_1(z),...,y_k=y_k(z)\). We have the system of k linear equations in \(p_{k-1}(z),...,p_0(z)\)

$$\begin{aligned} p_{k-1}(z)\, y_\ell ^{k-1} +... + p_1(z)\, y_\ell + p_0(z)=p(z,y_\ell ) \ \ \ \ \textrm{for} \ \ \ell =1,...,k. \end{aligned}$$

The matrix of this system is a Vandermonde one and, consequently, the square of its determinant is a non-zero function \(W=W(y_1,...,y_k)\) polynomially depending on \(y_1,...,y_k\) and symmetric in these variables. By the fundamental theorem of symmetric polynomials, W has a unique representation

$$\begin{aligned} W(y_1,...,y_k) = Q(e_1(y_1,...,y_k), ..., e_k(y_1,...,y_k)) \end{aligned}$$

where \(e_1(y_1,...,y_k), ...,e_k(y_1,...,y_k) \) are elementary symmetric polynomials and Q is a polynomial of k variables of degree at most \(\deg _{y_1}\! W\) (see e.g., Theorem 8.4 and its proof in [23, Sec. 8.2]). Thanks to Vieta’s formulas, we obtain

$$\begin{aligned} e_1(y_1,...,y_k)=s_{k-1}(z), \ \ ..., \ \ e_k(y_1,...,y_k)=(-1)^{k+1}s_0(z), \end{aligned}$$

and so

$$\begin{aligned} W(y_1,...,y_k)=Q(s_{k-1}(z), \ \ ..., \ \ (-1)^{k+1}s_0(z))=:\widetilde{Q}(z). \end{aligned}$$

Observe that \(\{z\in \mathbb {C}^N\ :\ \widetilde{Q}(z)=0\}=\mathscr {A}\). Since \(K\setminus \mathscr {A}\ne \emptyset \), we see that \(\widetilde{Q}\not \equiv 0\) on K. Solving the above system of linear equations, for any \(z\in \mathbb {C}^N\setminus \mathscr {A}\) and \(j\in \{0,...,k-1\}\) we have

$$\begin{aligned} p_j^2(z)=[q_{1j}(y_1,...,y_k)\, p(z,y_1)+...+q_{kj}(y_1,...,y_k)\, p(z,y_k)]^2/W(y_1,...,y_k) \end{aligned}$$

where \(q_{1j},...,q_{kj}\) are polynomials of k variables independent of p.
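The mechanism of the proof can be illustrated numerically. In the assumed example \(s(z,y)=y^2-z\) with \(z_0=4\notin \mathscr {A}(s)=\{0\}\), the coefficients \(p_0(z_0), p_1(z_0)\) are recovered from the values \(p(z_0,y_\ell )\) by solving the Vandermonde system, and here \(W\) coincides with the discriminant:

```python
import numpy as np

# Numerical sketch (assumed example): s(z, y) = y^2 - z, z0 = 4, so the
# roots y1 = 2, y2 = -2 are distinct; the coefficients of
# p(z, y) = p_0(z) + p_1(z)*y are recovered from p(z0, y_l).
z0 = 4.0
y1, y2 = 2.0, -2.0                  # roots of y^2 = z0
p0, p1 = 3.0, 5.0                   # coefficients to be recovered
values = np.array([p0 + p1*y1, p0 + p1*y2])   # p(z0, y1), p(z0, y2)
V = np.array([[1.0, y1], [1.0, y2]])          # Vandermonde matrix
coeffs = np.linalg.solve(V, values)
assert np.allclose(coeffs, [p0, p1])
# W = (Vandermonde determinant)^2 = (y1 - y2)^2 equals the
# discriminant 4*z0 in this quadratic example:
assert np.isclose(np.linalg.det(V)**2, 4*z0)
```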

Take a point \(z_0\in K\) at which the norm \(\Vert p_j^2\,\widetilde{Q}\Vert _K\) is attained (in the case \(p_j\equiv 0\) on K, choose \(z_0\) additionally so that \(\widetilde{Q}(z_0)\ne 0\)). Consequently,

$$\begin{aligned} \Vert p_j^2\,\widetilde{Q}\Vert _K&= |p_j^2(z_0)\, W(y_1(z_0),...,y_k(z_0))|\\&= |q_{1j}(y_1,...,y_k)\, p(z_0,y_1)+...+q_{kj}(y_1,...,y_k)\, p(z_0,y_k)|^2\\&\le \Vert q_{1j}(y_1,...,y_k)\, p(z,y_1)+...+q_{kj}(y_1,...,y_k)\, p(z,y_k)\Vert _{K}^2. \end{aligned}$$

Since the projection \(\pi \) given by (6) is a proper mapping, the set \(E=\pi ^{-1}(K)\) is compact and

$$\begin{aligned} \Vert q_{1j}(y_1,...,y_k)\, p(z,y_1)+...+q_{kj}(y_1,...,y_k)\, p(z,y_k) \Vert _{K} \le \widetilde{C} \ \Vert p\Vert _{E} \end{aligned}$$

with a constant \(\widetilde{C}\) independent of p. By means of the division inequality,

$$\begin{aligned} \Vert p_j ^2\Vert _{K}&\le C(\deg \widetilde{Q} + 2\,\deg p_j)^{m\, \deg \widetilde{Q}} \Vert p_j^2\, \widetilde{Q}\Vert _{K}\\&\le C\, \widetilde{C}^2 (\deg \widetilde{Q} + 2\,\deg p_j)^{m\, \deg \widetilde{Q}} \Vert p\Vert _{E}^2 \le M\, (\deg p)^{m\, \deg \widetilde{Q}}\Vert p\Vert _E^2 \end{aligned}$$

for \(j=0,...,k-1\). Since \(\deg Q \le \deg _{y_1} W\), we have \(\deg \widetilde{Q} \le \deg s\, \deg _{y_1} W = 2(k-1)\, \deg s\) and inequality (9) holds with \(m_0 = m\, (k-1)\,\deg s\).

Note that for \(k=1\) we have \(m_0=0\), which is a direct consequence of inequality (9) or of the formula \( m_0 = m\, (k-1)\,\deg s\).\(\square \)
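The degree count \(\deg _{y_1}\! W=2(k-1)\) used above can be checked symbolically for a small case, say \(k=3\):

```python
import sympy as sp

# Sketch of the degree count at the end of the proof: for k = 3 the
# squared Vandermonde determinant W(y1, y2, y3) has degree 2(k-1) = 4
# in y1 and is symmetric in its variables.
y1, y2, y3 = sp.symbols('y1 y2 y3')
V = sp.Matrix([[1, y1, y1**2], [1, y2, y2**2], [1, y3, y3**2]])
W = sp.expand(V.det()**2)
assert sp.degree(W, y1) == 4      # = 2*(k-1) with k = 3
# symmetry under swapping y1 and y2:
swapped = W.subs({y1: y2, y2: y1}, simultaneous=True)
assert sp.expand(W - swapped) == 0
```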

Proposition 3.4

If a polynomial \(w\in \mathscr {P}(z,y)\) (with \(z\in \mathbb {C}^N, \ y\in \mathbb {C})\) of degree \(k\ge 1\) in y is irreducible in the ring \(\mathscr {P}(z,y)\) then for some \(z_0\in \mathbb {C}^N\) the polynomial \(y\mapsto w(z_0,y)\) has k pairwise distinct roots, i.e. \(\mathbb {C}^N\setminus \mathscr {A}(w)\ne \emptyset \).

Proof

Here we will use the algebraic notation \(\mathbb {C}[z]\) for the ring of polynomials and \(\mathbb {C}(z)\) for the field of rational functions. If the polynomial w is irreducible in \(\mathscr {P}(z,y)=\mathbb {C}[z][y]\) then by the Gauss lemma (see e.g., Corollary 4.4 and Theorem 4.7 in [19]), w is irreducible in \(\mathbb {C}(z)[y]\) too. Consequently, w and \(\tfrac{\partial w}{\partial y}\) are coprime in \(\mathbb {C}(z)[y]\). By Bézout’s identity, \(a\,w+b\;\tfrac{\partial w}{\partial y}=1\) for some \(a,b\in \mathbb {C}(z)[y]\). Therefore, for some \(\ell \in \mathbb {N}_0\) and \(a_0,...,a_\ell ,\tilde{a}_0,...,\tilde{a}_\ell \in \mathbb {C}[z]\) we can write a in the form \(a(z,y)=\sum _{j=0}^\ell \tfrac{a_j(z)}{\tilde{a}_j(z)}\,y^j\), and similarly for b. Clearing the common denominators in a and b, we can find \(c,\, d\in \mathbb {C}[z][y]\) and \(g\in \mathbb {C}[z], \, g\not \equiv 0\) satisfying the formula \(c\,w+d\;\tfrac{\partial w}{\partial y}=g.\) For any point \(z\in \mathbb {C}^{N}\) such that \(g(z)\ne 0\) and for any \(y\in \mathbb {C}\) we have

$$c(z,y)\,w(z,y)+d(z,y)\;\tfrac{\partial w}{\partial y}(z,y)=g(z).$$

Therefore, for such z it is impossible that \(w(z,y_0)=\tfrac{\partial w}{\partial y}(z,y_0)=0\) for some \(y_0\), i.e. \(z\in \mathbb {C}^N\setminus \mathscr {A}(w)\).\(\square \)
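A concrete instance of the construction in the proof, assuming the irreducible example \(w(z,y)=y^2-z\): clearing denominators in Bézout's identity yields \(c\,w+d\,\tfrac{\partial w}{\partial y}=g\) with \(c=-2\), \(d=y\), \(g=2z\).

```python
import sympy as sp

# Concrete instance (assumed example): for the irreducible w = y^2 - z
# we exhibit c, d in C[z][y] and a nonzero g in C[z] with
# c*w + d*dw/dy = g, so w and dw/dy have no common root whenever
# g(z) != 0.
z, y = sp.symbols('z y')
w = y**2 - z
wy = sp.diff(w, y)               # 2*y
c, d, g = -2, y, 2*z             # obtained by clearing denominators
assert sp.expand(c*w + d*wy - g) == 0
# g vanishes exactly where Res_y(w, dw/dy) does, i.e. on A(w) = {0}:
assert sp.solve(sp.resultant(w, wy, y), z) == sp.solve(g, z)
```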

By Proposition 3.4, irreducibility is sufficient for w to have the property \(\mathbb {C}^N\setminus \mathscr {A}(w)\ne \emptyset \). However, it is not a necessary condition: the reducible polynomial \(w(z,y)=y^2-z^2\), \(z,y\in \mathbb {C}\), has two distinct roots \(y_1,y_2\) for any \(z\in \mathbb {C}{\setminus } \{0\}\).

Lemma 3.9 will show that condition (9) is equivalent to a Markov type estimate in y. Before stating it, we give a definition of Markov inequality.

Let \(\textbf{W}\) be an infinite dimensional subspace of \(\mathscr {P}(\mathbb {C}^N)\) which is invariant under derivation.

Definition 3.5

A compact set \(K\subset \mathbb {C}^N\) satisfies the Markov inequality for polynomials from \(\textbf{W}\) with exponent \(m>0\) (cf. [4, Def.20]) if there exists a constant \(M>0\) such that for all \(p\in \textbf{W}\)

$$\begin{aligned} \Vert |\textrm{grad} \, p | \Vert _K\le M \, (\deg p)^m\Vert p\Vert _K. \end{aligned}$$
(10)

A set K with the above property is called a \(\textbf{W}\)-Markov set.

It is worth noting that for a specific algebraic hypersurface \(V=\{(z,y)\in \mathbb {C}^{N-1}\times \mathbb {C}\,: \, y^k=s(z)\}\), where \(s\in \mathbb {C}[z_1,\ldots ,z_{N-1}]\), the Markov inequality for polynomials of variables \((z,y)\in \mathbb {C}^N\) of degree at most \(k-1\) in y satisfies conditions (i) and (ii) of Question 1, as was observed in [4]. This agrees with the general construction of the space W presented in this paper, if we take a Gröbner basis for the reverse lexicographical ordering in the family of monomials. The Markov inequality introduced in [4] for these specific hypersurfaces was applied to polynomial approximation in [3], cf. [2] and [16].

Remark 3.6

If K is a \(\textbf{W}\)-Markov set then for all \(\alpha \in \mathbb {N}_0^N\) and \(p\in \textbf{W}\) we have

$$\begin{aligned} \Vert D^{\alpha } p \Vert _K\le M^{|\alpha |}(\deg p)^{m|\alpha |}\Vert p\Vert _K. \end{aligned}$$

Consequently, the set K is determining for the space \(\textbf{W}\), i.e. no polynomial from \(\textbf{W}\setminus \{0\}\) vanishes identically on K.

We can see that if \(\textbf{W}\) is invariant under homothety and K is a \(\textbf{W}\)-Markov set, then so is \(\lambda K+a\) for any \(\lambda \in \mathbb {C}\) and \(a\in \mathbb {C}^N\).

The most interesting case concerns the space \(\textbf{W}=\textbf{W}_v\) constructed in Sect. 2 for an algebraic variety \(V=V(s_1,\ldots ,s_n)\subset \mathbb {C}^N\), because the space \(\textbf{W}_v\) contains a representative of each class from \(\mathscr {P}(V)\) and the mapping \( \textbf{W}_v \ni p \ \mapsto \ p_{|_V} \in \mathscr {P}(V)\) is an isomorphism. Moreover, \(\textbf{W}_v\) is invariant under derivation and homothety.

Corollary 3.7

If a compact subset E of an algebraic variety \(V=V(s_1,\ldots ,s_n)\subset \mathbb {C}^N\) is a \(\textbf{W}_v\)-Markov set then E is determining for polynomials from \(\mathscr {P}(V)\).

Clearly, Definition 3.5 is a generalization of Markov inequality (1): it suffices to take W\(\,=\mathscr {P}(\mathbb {C}^N)\). In this case, estimate (10) is often called the classical Markov inequality and was studied by Baran, Bos, Goetgheluck, Goncharov, Milman, Pleśniak and many others. A set \(K\subset \mathbb {C}^N\) satisfying the classical Markov inequality is polynomially determining and is usually called a Markov set.

Theorem 3.8

([4, Th.3, Corol.6]). Every Markov set satisfies the division inequality. More precisely, inequality (10) with \(\textbf{W}=\mathscr {P}(\mathbb {C}^N)\) and exponent m implies division inequality (7) with the same exponent m. Moreover, if K is a Markov set in \(\mathbb {C}^N\) and \(\mathbb {B}\in \left( \mathscr {P}(\mathbb {C}^N)\right) ^{k\times k}\) is a polynomial matrix whose determinant is a non-zero polynomial then

$$\begin{aligned} \Vert \mathbb {P}\Vert _K\le M (n+\deg \det \mathbb {B})^{m\deg \det \mathbb {B}}\Vert \mathbb {B}\mathbb {P}\Vert _K \end{aligned}$$

where \(M>0\) is independent of \(\mathbb {P}\in \left( \mathscr {P}_n(\mathbb {C}^N)\right) ^k\).

Lemma 3.9

Let \(\textbf{W}\) be an infinite dimensional subspace of \(\mathscr {P}(\mathbb {C}^N)\), invariant under derivation. If F is a compact subset of \(\mathbb {C}^{N+1}\) and \(m>0\) then the following conditions are equivalent:

  1. (i)

    for all polynomials \(p\in \textbf{W}\otimes \mathscr {P}_{k-1}(y)\),

    $$\begin{aligned} \left\| \tfrac{\partial p}{\partial y}\right\| _F \ \le \ C (\deg p)^m\Vert p\Vert _F \end{aligned}$$

    with constants \(C,m\) independent of p,

  2. (ii)

    for all \(p\in \textbf{W}\otimes \mathscr {P}_{k-1}(y)\) in the form \(p(z,y)=\sum _{j=0}^{k-1} p_j(z)\, y^j\),

    $$\begin{aligned} \max _{j=0,...,k-1} \!\Vert p_j\Vert _{\pi (F)} \le M\, (\deg p)^{\mu } \Vert p\Vert _F \end{aligned}$$

    where \(M,\mu \) are positive constants independent of p and \(\pi \) is the projection \(\mathbb {C}^{N+1}\ni (z,y)\mapsto z\in \mathbb {C}^N\).

Proof

Fix a polynomial \(p\in \textbf{W}\otimes \mathscr {P}_{k-1}(y)\) in the form \(p(z,y)=\sum \nolimits _{j=0}^{k-1} p_j(z)\, y^j \) with \(p_j\in \textbf{W}\). First, we prove \((i)\Rightarrow (ii)\). Consider the column polynomial vector \(\textbf{P}:=[p_0,\ldots ,p_{k-1}]^T\) and the invertible matrix

$$\begin{aligned} \textbf{A}:=\left[ \begin{array}{ccccc} 1 &{} y &{} y^2 &{} \ldots &{} y^{k-1} \\ 0 &{} 1 &{} 2y &{} \ldots &{} (k-1) y^{k -2} \\ 0 &{} 0 &{} 2 &{} \ldots &{} (k-1)(k-2)y^{k-3} \\ \vdots &{} &{} &{} \ddots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} (k-1) ! \end{array} \right] . \end{aligned}$$

For any \(j\in \{0,..., k-1 \}\) we can write

$$\begin{aligned} \Vert p_j\Vert _{F}&\le \Vert \textbf{P}\Vert _F=\Vert \textbf{A}^{-1}\textbf{A}\textbf{P}\Vert _F \le \Vert \textbf{A}^{-1}\Vert _F \Vert \textbf{A}\textbf{P}\Vert _F \\&=\Vert \textbf{A}^{-1}\Vert _F \max \{ \Vert p_0+yp_1 +y^2\!p_2+...+y^{k-1}\!p_{k-1}\Vert _F,\\&\qquad \Vert p_1+2y\,p_2+3y^2 p_3+...+(k-1) y^{k-2} p_{k-1}\Vert _F,...,\Vert (k-1)! \, p_{k-1}\Vert _F \} \\&= \Vert \textbf{A}^{-1}\Vert _F\, \max \{ \Vert p\Vert _F, \Vert \tfrac{\partial p}{\partial y} \Vert _F,...,\Vert \tfrac{\partial ^{k-1}p}{\partial y^{k-1}}\Vert _F \}. \end{aligned}$$

Applying (i) we get

$$\begin{aligned} \Vert p_j\Vert _F \le C^{k-1} \Vert \textbf{A}^{-1}\Vert _F\,(\deg p)^{(k-1) m} \Vert p\Vert _F. \end{aligned}$$

Since every element of the matrix \(\textbf{A}^{-1}\) is equal to zero or to \(a\,y^j\) with some \(a\in [-1,1]\) and \(j\in \{ 0,...,k-1\}\), we have

$$\begin{aligned} \Vert \textbf{A}^{-1}\Vert _F\le \sum _{j=0}^{k-1} R^{k-1}=k\, R^{k-1} \end{aligned}$$

where \(R:=\max \{1,\max \{|y|\,: \, y\in \pi _y(F)\}\}\), \(\pi _y(F):=\{y\in \mathbb {C}\,: \, (z,y)\in F \ \mathrm{for \ some \ } z\in \mathbb {C}^{N}\}\). This easily implies condition (ii).

To show \((ii)\Rightarrow (i)\) observe that from (ii) we obtain

$$\begin{aligned} \left\| \tfrac{\partial p}{\partial y}\right\| _F \ \le \sum _{j=1}^{k-1} \Vert p_j\Vert _{\pi (F)}\, j\Vert y^{j-1}\Vert _F \le \ \sum \limits _{j=1}^{k-1} \, j\Vert y^{j-1}\Vert _F \, M (\deg p)^\mu \Vert p\Vert _F \end{aligned}$$

which gives (i) with \(C=M\sum \nolimits _{j=1}^{k-1} \, j\Vert y^{j-1}\Vert _F\), \(m=\mu \).\(\square \)
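The identity \(\textbf{A}\textbf{P}=[p,\tfrac{\partial p}{\partial y},\ldots ,\tfrac{\partial ^{k-1}p}{\partial y^{k-1}}]^T\) underlying the proof, together with the form of \(\textbf{A}^{-1}\), can be verified symbolically for the small case \(k=3\):

```python
import sympy as sp

# Verification for k = 3 (assumed small case): the matrix A maps
# P = [p0, p1, p2]^T to [p, dp/dy, d^2p/dy^2]^T.
y, p0, p1, p2 = sp.symbols('y p0 p1 p2')
p = p0 + p1*y + p2*y**2
A = sp.Matrix([[1, y, y**2],
               [0, 1, 2*y],
               [0, 0, 2]])
P = sp.Matrix([p0, p1, p2])
rhs = sp.Matrix([p, sp.diff(p, y), sp.diff(p, y, 2)])
assert (A * P - rhs).expand() == sp.zeros(3, 1)
# Every entry of A^{-1} is 0 or a*y^j with |a| <= 1, as used for the
# bound ||A^{-1}||_F <= k * R^{k-1}:
Ainv_expected = sp.Matrix([[1, -y, y**2/2],
                           [0, 1, -y],
                           [0, 0, sp.Rational(1, 2)]])
assert sp.simplify(A.inv() - Ainv_expected) == sp.zeros(3, 3)
```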

4 Inequalities on Algebraic Hypersurfaces

Let \(V=V(s)\) be an algebraic hypersurface given by a polynomial \(s\in \mathscr {P}(z,y)\), \(z\in \mathbb {C}^N\), \(y\in \mathbb {C}\). We can assume, taking a linear change of variables if necessary, that s is in form (3). Consider the reverse lexicographical ordering in the family of monomials \(\mathbb {T}^{N+1}\). In this section we denote by \(\textbf{W}\) the space of polynomial representatives of \(\mathscr {P}(V)\)

$$\begin{aligned} \textbf{W}=\textbf{W}_v=\mathscr {P}(z)\otimes \mathscr {P}_{k-1}(y) \end{aligned}$$

that is invariant under derivation and homothety.

One of the most important consequences of Proposition 3.3 is the Markov inequality on algebraic hypersurfaces.

Theorem 4.1

Let E be a compact subset of V(s) and \(\pi \) be the projection given by (6). Assume that \(\mathbb {C}^N\setminus \mathscr {A}(s)\ne \emptyset \) and \(E=\pi ^{-1}(\pi (E))\). Then

$$\begin{aligned} E \text{ is } \text{ a } \textbf{W}\text{-Markov } \text{ set } \text{ on } V(s) \text{ if } \text{ and } \text{ only } \text{ if } \pi (E) \text{ is } \text{ a } \text{ Markov } \text{ set } \text{ in } \mathbb {C}^N. \end{aligned}$$

Proof

Let \(K:=\pi (E)\subset \mathbb {C}^N\) be a Markov set. By Theorem 3.8, K satisfies the division inequality. Since every Markov set is polynomially determining and \(\mathscr {A}\) is an algebraic set in \(\mathbb {C}^N\), we obtain \(K\setminus \mathscr {A}\ne \emptyset \). From Proposition 3.3 and Lemma 3.9 we get (9) and the Markov inequality for the derivative with respect to y for polynomials from \(\textbf{W}\). Regarding derivatives with respect to z, we have

$$\begin{aligned} \left\| \frac{\partial p}{\partial z_l}\right\| _E&\le \sum _{j=0}^{k-1} \left\| \frac{\partial p_j}{\partial z_l}\right\| _K\Vert y^j\Vert _E\\&\le k M_1 (\deg p)^m\Vert [p_0,\ldots , p_{k-1}]\Vert _K \le k M_0 M_1 (\deg p)^{m+m_0}\Vert p\Vert _E \end{aligned}$$

for \(l=1,\ldots , N\), \(p(z,y)=\sum _{j=0}^{k-1}p_j(z)y^j\in \textbf{W}\). This gives the Markov inequality for polynomials from \(\textbf{W}\) on the set E.

The converse is obvious.\(\square \)

Recall that if s is irreducible then \(\mathbb {C}^N\setminus \mathscr {A}(s)\ne \emptyset \), see Proposition 3.4.

Remark 4.2

The condition \(\mathbb {C}^N\setminus \mathscr {A}(s)\ne \emptyset \) is a necessary assumption for the Markov inequality for polynomials from \(\textbf{W}=\textbf{W}_v=\mathscr {P}(z)\otimes \mathscr {P}_{k-1}(y)\). The simplest example showing this is given by \(s(z,y)= (y-z)^2\), \(E=\{(z,y)\in V(s)\,:\, z\in [-1,1]\}\subset \mathbb {C}^2\): for the polynomial \(p(z,y)=y-z\) the norm of p on E vanishes while \(\left\| \frac{\partial p}{\partial y}\right\| _E=1\).

Remark 4.3

Examples given at the end of this section show that the assumption \(\pi ^{-1}(\pi (E))=E\) in Theorem 4.1 is necessary for the Markov inequality on E for polynomials from \(\textbf{W}\). This is true not only for reducible (see Example 4.7) but also for irreducible algebraic sets (Example 4.8).

Regarding the division inequality on algebraic hypersurfaces, we have the following

Theorem 4.4

Let K be a compact subset of \(\mathbb {C}^{N}\) and s be an irreducible polynomial in form (3). If K is a Markov set then \(E=\pi ^{-1}(K)\subset V(s)\) satisfies the division inequality and is a \(\textbf{W}\)-Markov set in V(s).

This is an easy consequence of Theorem 4.1, Proposition 3.4 and of the next result which does not require s to be irreducible.

Theorem 4.5

Let \(E\subset V(s)\) be a compact set. Assume that \(\mathbb {C}^N\setminus \mathscr {A}(s)\ne \emptyset \) and \(E=\pi ^{-1}(\pi (E))\). If \(\pi (E)\) is a Markov set in \(\mathbb {C}^N\) then for every polynomial \(q\in \mathscr {P}(z,y)\) coprime to s with \(q_{|_E}\not \equiv 0\) there exist \(M,m>0\) such that

$$\begin{aligned} \Vert p\Vert _E \le M(\deg p + \deg q)^{m\, \deg q}\Vert pq\Vert _E \end{aligned}$$
(11)

for all polynomials \(p\in \mathscr {P}(\mathbb {C}^{N+1})\).

Proof

Fix a polynomial \(q\in \mathscr {P}(z,y)\), \(q_{|_E}\not \equiv 0\), coprime to s. Find \(\mathfrak {q}\in \textbf{W}\) such that \(\mathfrak {q}=q\) on V(s) and write it in the form \(\mathfrak {q}(z,y)=\sum _{j=0}^{k-1}q_j(z)y^j\). Observe that

$$\begin{aligned} \deg \mathfrak {q}\le \deg q\cdot \deg s. \end{aligned}$$

Fix also a polynomial \(p\in \mathscr {P}(z,y)\) and find \(\mathfrak {p}\in \textbf{W}\) such that \(\mathfrak {p}=p\) on V(s) and \(\mathfrak {p}(z,y)=\sum _{j=0}^{k-1} p_j(z)\, y^j\). Let \(\textbf{P}:=[p_0,...,p_{k-1}]^T\). Consider the matrix \(\textbf{M}_{\mathfrak {q}}^s\) such that for \((z,y)\in V\) we have

$$\begin{aligned} \mathfrak {p}(z,y)\, \mathfrak {q}(z,y) \ = \ [1,y,...,y^{k-1}] \ \textbf{M}_{\mathfrak {q}}^s \ \textbf{P}(z). \end{aligned}$$

The determinant of \(\textbf{M}_{\mathfrak {q}}^s\) is a polynomial in z only and is equal to the resultant \(\textrm{Res}_y(\mathfrak {q},s)\) (see e.g. [10, Prop.1.5, Chap.3]). Since q is coprime to s, \(\textrm{Res}_y(\mathfrak {q},s)=\textrm{Res}_y(q,s)\) is a non-zero polynomial in \(\mathbb {C}^N\) and so is \(\det \textbf{M}_{\mathfrak {q}}^s\). Thanks to the Markov inequality on \(K:=\pi (E)\), by Theorem 3.8, we have

$$\begin{aligned} \Vert p\Vert _E&\le \sum _{j=0}^{k-1} \Vert p_j\Vert _{K} \Vert y\Vert _E^j \le \Vert \textbf{P}\Vert _{K} \sum _{j=0}^{k-1} \Vert y\Vert _E^j \\ &\le c\, \big (\deg \det \textbf{M}_{\mathfrak {q}}^s + \max _{j=0,\ldots ,k-1} \deg p_j\big )^{m\, \deg \det \textbf{M}_{\mathfrak {q}}^s}\, \Vert \textbf{M}_{\mathfrak {q}}^s \textbf{P}\Vert _{K} \sum _{j=0}^{k-1} \Vert y\Vert _E^j. \end{aligned}$$

Observe that

$$\begin{aligned} \deg \det \textbf{M}_{\mathfrak {q}}^s&\le k\deg \mathfrak {q} + \tfrac{(k-1)k}{2} \deg s \\ &\le k\deg s \, \deg q + \tfrac{(k-1)k}{2} \deg s = k\deg s \, \big (\deg q +\tfrac{k-1}{2}\big ). \end{aligned}$$

On the other hand,

$$\begin{aligned} \Vert \textbf{M}_{\mathfrak {q}}^s \, \textbf{P}\Vert _{K}=\Vert [u_0,...,u_{k-1}]\Vert _{K} \text{ where } (\mathfrak {p}\mathfrak {q})(z,y)=\sum \limits _{j=0}^{k-1} u_j(z)\, y^j \text{ on } V(s). \end{aligned}$$

Since K is a Markov set, it is polynomially determining; consequently, \(K\setminus \mathscr {A}\ne \emptyset \) and K satisfies the division inequality. By Proposition 3.3,

$$\begin{aligned} \Vert [u_0,\ldots ,u_{k-1}]\Vert _{K}&\le M_0 (\deg (\mathfrak {p}\mathfrak {q}))^{m_0} \Vert \mathfrak {p}\mathfrak {q}\Vert _E\\ &\le M_0 (\deg s)^{m_0}(\deg p + \deg q)^{m_0} \Vert pq\Vert _E \end{aligned}$$

and the proof is complete.\(\square \)
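The key algebraic fact in the proof, \(\det \textbf{M}_{\mathfrak {q}}^s=\textrm{Res}_y(\mathfrak {q},s)\), can be checked by computer algebra in small cases. A minimal sketch in Python using sympy, with an illustrative choice of s and q of our own (not taken from the paper):

```python
import sympy as sp

z, y = sp.symbols('z y')

# Illustrative data: s monic in y as in form (3), q coprime to s.
s = y**2 - z**2
q = y + 1
k = int(sp.degree(s, y))   # basis of C[z,y]/(s) over C[z]: 1, y, ..., y^{k-1}

# Column j of M_q^s holds the coefficients of q*y^j reduced modulo s
# (polynomial division with respect to y).
M = sp.zeros(k, k)
for j in range(k):
    _, rem = sp.div(q * y**j, s, y)
    for i in range(k):
        M[i, j] = sp.Poly(rem, y).coeff_monomial(y**i)

det_M = sp.expand(M.det())
res = sp.expand(sp.resultant(q, s, y))
print(det_M, res)   # both equal 1 - z**2, a non-zero polynomial in z only
```

Here \(k=2\), the columns of M are the reductions of \(q\cdot 1\) and \(q\cdot y\) modulo s, and the determinant agrees with the resultant, as the proof requires.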

Remark 4.6

Example 4.7 given below shows that if E is a compact subset of an algebraic variety V then (in general) the \(\textbf{W}_v\)-Markov inequality does not imply the division inequality. This is in marked contrast to the classical case of polynomially determining sets, where every Markov set satisfies the division inequality, see Theorem 3.8. However, for any irreducible algebraic variety \(V=V(s)\), if \(E\subset V\), \(\pi ^{-1}(\pi (E))=E\) and E is a \(\textbf{W}_v\)-Markov set then E satisfies the division inequality. This is a consequence of Theorems 4.1 and 4.4.

For reducible algebraic sets we have a weak version of division inequality, see Theorem 4.5. Example 4.7 shows that irreducibility in Theorem 4.4 and the assumption of coprime polynomials in Theorem 4.5 cannot be omitted.

Moreover, two examples given below show that the assumption \(\pi ^{-1}(\pi (E))=E\) is necessary in Theorem 4.1. The first of them concerns a reducible case and the second one is more involved because it deals with an irreducible algebraic set.

Example 4.7

Consider the set \(E=E_1\cup E_2\subset \mathbb {C}^2\) where

$$\begin{aligned} E_1:=\{(t,t)\,:\ t\in [-1,1]\}\ \text{ and } \ E_2:=\{(t,-t)\,:\ t\in [-1,1]\}. \end{aligned}$$

By means of Theorems 4.1 and 4.5, we can show that E satisfies the Markov inequality for polynomials from \(\mathscr {P}(z)\otimes \mathscr {P}_1(y)\) as well as for \(\mathscr {P}_1(z)\otimes \mathscr {P}(y)\). Indeed, we see that \(E=\{(z,y)\in \mathbb {C}^2 \,: \, s(z,y):=y^2-z^2=0, \ z\!\in \![-1,1]\}\), \(\mathscr {A}(s)=\{0\}\) and \([-1,1]\) is a Markov set. Consequently, for the reverse lexicographical ordering in the family of monomials \(\mathbb {T}^2\) we get the Markov inequality on E for polynomials from \(\mathscr {P}(z)\otimes \mathscr {P}_1(y)\). Analogously, \(E=\{(z,y)\in V(s) \,: \, y\!\in \![-1,1]\}\) and for the lexicographical ordering we obtain the Markov inequality on E for \(\mathscr {P}_1(z)\otimes \mathscr {P}(y)\).

On the other hand, we can observe that \(E_1\) (and analogously \(E_2\)) satisfies the Markov inequality neither for \(\mathscr {P}(z)\otimes \mathscr {P}_1(y)\) nor for \(\mathscr {P}_1(z)\otimes \mathscr {P}(y)\). To see this, it is sufficient to consider \(p(z,y)=z-y\in (\mathscr {P}(z)\otimes \mathscr {P}_1(y))\cap (\mathscr {P}_1(z)\otimes \mathscr {P}(y)).\)

Regarding the division inequality, we can show that (7) does not hold on E. Indeed, if we consider \(q(z,y)=z+y\), \(p(z,y)=z-y\) then \(q\not \equiv 0\) on E, \(\Vert p\Vert _E=\Vert p\Vert _{E_2}=2\) and \(\Vert pq\Vert _E=0\). Therefore, we see that the polynomials q and s have to be coprime in the assumptions of Theorem 4.5.
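The failure of the division inequality on E can also be verified numerically on a sample of points; a small sketch (Python with numpy, the sampling of E is our choice):

```python
import numpy as np

# Sample E = E1 u E2, the two diagonal segments over [-1, 1]
t = np.linspace(-1.0, 1.0, 201)
E = np.concatenate([np.stack([t, t], axis=1),    # E1: y = z
                    np.stack([t, -t], axis=1)])  # E2: y = -z

p = lambda z, y: z - y
q = lambda z, y: z + y

norm_p  = np.abs(p(E[:, 0], E[:, 1])).max()                        # ||p||_E = 2
norm_pq = np.abs(p(E[:, 0], E[:, 1]) * q(E[:, 0], E[:, 1])).max()  # ||pq||_E = 0
print(norm_p, norm_pq)
```

Since pq = z^2 - y^2 vanishes identically on both branches while p attains the value 2 on \(E_2\), no constant in (7) can work.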

Example 4.8

Fix two coprime numbers \(\alpha ,\beta \in \mathbb {N}\) and let

$$\begin{aligned} V=V(\alpha , \beta )=\{(t^\beta ,t^\alpha )\,: \, t\in \mathbb {C}\}, \ \ \ \ \ \ \ F=F_{\alpha , \beta }=\{(t^\beta ,t^\alpha )\,: \, t\in [0,1]\}. \end{aligned}$$

By Bézout’s identity, we can show that

$$\begin{aligned}&V=V(s_{\alpha ,\beta }) \quad \text{ for } \quad s_{\alpha ,\beta }(z,y)=z^\alpha -y^\beta , \quad \text{ and } \\&F=\{(z,y)\in V(s_{\alpha ,\beta })\,: \, z,y\in [0,1]\}. \end{aligned}$$

Consider also the set

$$\begin{aligned} E=E_{\alpha ,\beta } = \{(z,y)\in V(s_{\alpha ,\beta })\,: \, z\in [0,1]\}. \end{aligned}$$

The polynomial \(s_{\alpha ,\beta }\) is irreducible and for the reverse lexicographical ordering in \(\mathbb {T}^2\), by Theorem 4.4, we see that E satisfies the Markov inequality for polynomials from \(\textbf{W}_{\beta }:=\mathscr {P}(z)\otimes \mathscr {P}_{\beta - 1}(y)\), which is the space of representatives of \(\mathscr {P}(V)\).

We will compare the Markov inequality for polynomials from \(\textbf{W}_{\beta }\) with the tangential Markov inequality, which has been studied only for some real analytic sets, see e.g. [7]. Therefore, we are interested here in the real part of \(E_{\alpha ,\beta }\), i.e.

$$\begin{aligned} E_{\alpha ,\beta }\cap \mathbb {R}^2=\{(z,y)\,:\ z^\alpha =y^\beta , \ z\in [0,1],\ y\in [-1,1]\}. \end{aligned}$$

For any even \(\beta \) we have \(E_{\alpha ,\beta }\cap \mathbb {R}^2=\left\{ (z,y) \,:\ z\in [0,1],\ y=\pm z^{\frac{\alpha }{\beta }}\right\} \) and if \(\beta \) is odd then

$$\begin{aligned} E_{\alpha ,\beta }\cap \mathbb {R}^2=\left\{ (z,y) \,:\ z\in [0,1],\ y= z^{\frac{\alpha }{\beta }}\right\} = F_{\alpha ,\beta }. \end{aligned}$$

The set \(E_{\alpha ,1}=F_{\alpha ,1}\) satisfies the Markov inequality for polynomials from \(\mathscr {P}(z)\).

To deal with the case \(\beta \ge 2\), we recall Gonchar’s result (see e.g., [22]):

$$\begin{aligned} \textrm{dist}_{[0,1]}(x^{\delta },\mathscr {R}_{nn})\le e^{-c(\delta )\sqrt{n}}\ \text{ for } \delta >0,\ n\in \mathbb {N} \end{aligned}$$

where \(\textrm{dist}_{[0,1]}(x^{\delta },\mathscr {R}_{nn})\) is the distance on [0, 1] of the function \(f(x)=x^{\delta }\) from the set of rational functions u/v with \(u,v\in \mathscr {P}_n(\mathbb {R})\) and \(c(\delta )\) is a positive constant depending only on \(\delta \).
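The root-exponential rate in Gonchar's estimate is the heart of the argument below. The sketch illustrates it with Newman's classical rational approximant of \(|x|\) on \([-1,1]\) (a close relative of the case \(\delta =1/2\), via \(|x|=\sqrt{x^2}\)); the node choice follows our reading of Newman's construction, so the size of the error is only indicative:

```python
import numpy as np

def newman_abs(x, n):
    """Newman-type rational approximant of |x| on [-1, 1] of degree about n."""
    xi = np.exp(-1.0 / np.sqrt(n))
    nodes = xi ** np.arange(1, n)              # xi, xi^2, ..., xi^(n-1)
    p = lambda t: np.prod(t[..., None] + nodes, axis=-1)
    # The denominator p(x) + p(-x) is twice the even part of p, a polynomial
    # with positive coefficients, so it does not vanish on the real line.
    return x * (p(x) - p(-x)) / (p(x) + p(-x))

x = np.linspace(-1.0, 1.0, 2001)
n = 36
err = np.max(np.abs(np.abs(x) - newman_abs(x, n)))
print(err)   # root-exponentially small in n
```

For comparison, the best polynomial approximation of \(|x|\) of degree n decays only like 1/n (Bernstein), which is exactly the gap exploited in the proof below.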

For \(\beta \ge 2\) and \(n\in \mathbb {N}\), take the rational function u/v which is the best approximation of \(x^{\frac{\alpha }{\beta }}\) from \(\mathscr {R}_{nn}\); here and below we apply Gonchar's estimate with \(\delta =\alpha /\beta \). We will show that \(F_{\alpha ,\beta }\) does not satisfy the Markov inequality for \(\textbf{W}_{\beta }\). Consider the polynomial \(p(z,y)=u(z)-v(z)\, y\in \textbf{W}_{\beta }\). Suppose, contrary to our claim, that the Markov inequality holds:

$$\begin{aligned} \Vert v\Vert _{[0,1]}&= \Vert v\Vert _{F_{\alpha ,\beta }}=\left\| \frac{\partial p}{\partial y}\right\| _{F_{\alpha ,\beta }}\le M (n+1)^m\Vert p\Vert _{F_{\alpha ,\beta }}\\&= M (n+1)^m \max _{z\in [0,1]}\left| u(z)-v(z)\,z^{\frac{\alpha }{\beta }}\right| . \end{aligned}$$

Take a point \(z_0\in [0,1]\) at which the maximum is attained. Observe that \(v(z_0)\ne 0\), because otherwise the inequality

$$\begin{aligned} e^{-c(\delta )\sqrt{n}} \ \ge \ \left\| z^{\alpha / \beta }-\frac{u(z)}{v(z)}\right\| _{[0,1]} \ \ge \ \ \left| z_0^{\alpha / \beta }-\frac{u(z_0)}{v(z_0)}\right| \end{aligned}$$

implies \(u\equiv 0\) and \(v\equiv 0\), which is impossible. Consequently,

$$\begin{aligned} \Vert v\Vert _{[0,1]}&\le M (n+1)^m \left| u(z_0)-v(z_0)\,z_0^{\frac{\alpha }{\beta }}\right| = M (n+1)^m \, |v(z_0)| \left| \frac{u(z_0)}{v(z_0)} - z_0^{\frac{\alpha }{\beta }}\right| \\&\le M (n+1)^m \, \Vert v\Vert _{[0,1]} \max _{z\in [0,1]}\left| \frac{u(z)}{v(z)}-z^{\frac{\alpha }{\beta }}\right| \le M (n+1)^m \, \Vert v\Vert _{[0,1]} \, e^{-c(\delta )\sqrt{n}}, \end{aligned}$$

and we get a contradiction. Hence the set \(F_{\alpha ,\beta }\) does not satisfy the Markov inequality for polynomials from \(\textbf{W}_\beta \).

Corollary 4.9

The set \(E_{\alpha ,\beta }\) satisfies the Markov inequality for polynomials from \(\textbf{W}_\beta \) while \(F_{\alpha ,\beta }\) does not for \(\beta \ge 2\). Therefore, the condition \(\pi ^{-1}(\pi (E))=E\) is a necessary assumption in Theorem 4.1. Moreover, for \(\beta =2\) the set \(E_{\alpha , \beta } \) is contained in \(\mathbb {R}^2\), so \(F_{\alpha ,2}\) gives a counterexample showing that this assumption is necessary also for the real version of Theorem 4.1.

It is worth considering here the curves \(F_r:=\{(x,y)\,: \, y=x^r, \, 0\le x\le 1\}\) where \(r\in [1,\infty )\). Bloom, Levenberg, Milman and Taylor proved in [8] that for \(r=\frac{q}{p}\) with positive integers \(q> p\) in lowest terms, the set \(F_r\) admits the tangential Markov inequality with exponent \(\ell =2p\), which is optimal. In 2005 Gendre generalized this example and proved that every algebraic curve in \(\mathbb {R}^N\) admits a local tangential Markov inequality at each of its points (even singular ones) with some exponent \(\ell \), see [13]. In [17] we can find an example showing that the best exponent \(\ell \) depends on the location of singularities. Namely, if \(q\ge 3\) is an odd number then \(E_q=\{(t^2,t^q):t\in [-1,1] \}\) admits the tangential Markov inequality with exponent 2 and this is the best possible. It is easy to see that this exponent is two times smaller than the exponent for \(F_{\frac{q}{2}}\subset E_q\). One might expect that the tangential Markov inequality with exponent greater than one would characterize semialgebraic sets with singularities, but there are examples of exponential curves which admit the tangential Markov inequality with exponent 4, which is optimal (see [6]).

5 Inequalities on Algebraic Sets of Codimension Greater than One

Let \(V\subset \mathbb {C}^{N+1}\) be an algebraic set of codimension greater than one. Assume that V can be defined (after a linear change of variables, if necessary) by \(n+1\) polynomials \(s,s_1,...,s_n\in \mathscr {P}(\mathbb {C}^{N+1})\) such that \(s_1,...,s_n\) are polynomials of \(z=(z_1,...,z_N)\) and

$$\begin{aligned} s(z,y)\ = \ y^{k}+\sum _{j=0}^{k-1}\tilde{s}_{j}(z)\, y^j. \end{aligned}$$

In other words,

$$\begin{aligned}&V=\{(z,y)\in \mathbb {C}^{N+1}\, : \, z\in V_0, \ s(z,y)=0\} \ \ \text{ and } \nonumber \\&V_0=\{z\in \mathbb {C}^{N}\, : \, s_1(z)=\cdots =s_n(z)=0\}. \end{aligned}$$
(12)

Consider the projection \(\pi \) given by (6). Observe that \(\pi (V)=V_0\). Let \(\textbf{W}\subset \mathscr {P}(z)\) be a space of representatives of \(\mathscr {P}(V_0)\) constructed as in Section 2. We can easily prove that \(\textbf{W}\otimes \mathscr {P}_{k-1}(y)\) is a space of representatives of \(\mathscr {P}(V)\).

Proposition 5.1

Let E be a compact subset of V given by (12) such that \(E=\pi ^{-1}(\pi (E))\). If \(V_0\setminus \mathscr {A}(s)\ne \emptyset \) and \(\pi (E)\) satisfies the division inequality and the \(\textbf{W}\)-Markov inequality then E satisfies the Markov inequality for \(\textbf{W}\otimes \mathscr {P}_{k-1}(y)\).

Proof

Let \(K=\pi (E)\) satisfy the division inequality and the Markov inequality for polynomials from \(\textbf{W}\subset \mathscr {P}(\mathbb {C}^N)\). Taking into account Corollary 3.7 we see that \(K\setminus \mathscr {A}(s)\ne \emptyset \). Applying the same reasoning as in the proof of Proposition 3.3 we obtain inequality (9) for polynomials \(p\in \textbf{W}\otimes \mathscr {P}_{k-1}(y)\). By Lemma 3.9, we get inequality (i), i.e. the Markov inequality for the derivative of p with respect to y. Regarding the derivatives with respect to z, we proceed as in the proof of Theorem 4.1 and we obtain the Markov inequality on E for polynomials from \(\textbf{W}\otimes \mathscr {P}_{k-1}(y)\).\(\square \)

To state analogues of Theorems 4.4 and 4.5, we first define coprime and irreducible polynomials on an algebraic set.

Definition 5.2

Let \(\mathscr {V}\subset \mathbb {C}^N\) be an algebraic set and \(q,s\in \mathscr {P}(z,y)\) where \(z\in \mathbb {C}^N\), \(y\in \mathbb {C}\). We say that polynomials q and s are coprime (or relatively prime) on \(\mathscr {V}\) if Res\(_y(q,s)\not \equiv 0\) on \(\mathscr {V}\). The polynomial s is said to be irreducible on \(\mathscr {V}\) if it is relatively prime on \(\mathscr {V}\) with any polynomial \(q\in \mathscr {P}(z,y)\), \(q\not \equiv 0\) on \(\mathscr {V}\).
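Definition 5.2 is effectively checkable, since both the resultant and a normal form modulo a Gröbner basis of the ideal of \(\mathscr {V}\) can be computed. A minimal sketch in Python (sympy), where \(\mathscr {V}\), s and q are illustrative choices of ours:

```python
import sympy as sp

z1, z2, y = sp.symbols('z1 z2 y')

# Illustrative data: V = V(z1^2 + z2^2 - 1) in C^2, s monic in y, q a candidate.
v = z1**2 + z2**2 - 1
s = y**2 - z1
q = y - z2

# q and s are coprime on V iff Res_y(q, s) is not identically 0 on V,
# i.e. its normal form modulo a Groebner basis of I(V) is non-zero.
res = sp.resultant(q, s, y)                        # a polynomial in z1, z2 only
_, normal_form = sp.reduced(res, [v], z1, z2, order='grevlex')
print(res, normal_form)
```

Here the resultant is \(z_2^2-z_1\), whose normal form modulo \(z_1^2+z_2^2-1\) is non-zero, so q and s are coprime on this \(\mathscr {V}\) in the sense of Definition 5.2.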

The irreducibility of s on \(\mathscr {V}\) implies that \(\textrm{Res}_y\left( \tfrac{\partial s}{\partial y},s\right) \not \equiv 0\) on \(\mathscr {V}\) provided s is a non-constant polynomial in y. Consequently, \(\mathscr {V}\setminus \mathscr {A}(s)\ne \emptyset \). Repeating the proof of Theorem 4.5 we obtain the following result.

Theorem 5.3

Let E be a compact subset of V given by (12) such that \(E=\pi ^{-1}(\pi (E))\). If \(V_0\setminus \mathscr {A}(s)\ne \emptyset \) and \(\pi (E)\) satisfies the division inequality and the \(\textbf{W}\)-Markov inequality then for every polynomial \(q\in \mathscr {P}(z,y)\) coprime to s on \(V_0\) with \(q_{|_E}\not \equiv 0\) there exist \(M,m>0\) such that

$$\begin{aligned} \Vert p\Vert _E \le M(\deg p + \deg q)^{m\, \deg q}\Vert pq\Vert _E \end{aligned}$$
(13)

for all polynomials \(p\in \mathscr {P}(\mathbb {C}^{N+1})\).

Corollary 5.4

Let E be a compact subset of V given by (12) such that \(E=\pi ^{-1}(\pi (E))\). If s is an irreducible polynomial on \(V_0\) then E is a \(\textbf{W}\otimes \mathscr {P}_{k-1}(y)\)-Markov set and satisfies the division inequality if and only if \(\pi (E)\) is a \(\textbf{W}\)-Markov set and satisfies the division inequality.

It is perhaps worth remarking that the above corollary is a generalization of Theorem 4.4 because if we take \(V_0=\mathbb {C}^N\) then \(\textbf{W}=\mathscr {P}(z)\).

The above results have an easy form for some algebraic sets of codimension two. To show this, consider an algebraic set \(V\subset \mathbb {C}^{N+2}\) given by polynomials \(s_1\) and \(s_2\) in the following forms

$$\begin{aligned} s_1(z,y_1)\ = \ y_1^{d_1}+\sum _{j=0}^{d_1-1}\tilde{s}_{1,j}(z)\, y_1^j, \ \ \ \ \ \ \ s_2(z,y_1,y_2)\ = \ y_2^{d_2}+\sum _{j=0}^{d_2-1}\tilde{s}_{2,j}(z,y_1)\, y_2^j\end{aligned}$$
(14)

with \(z\in \mathbb {C}^N\), \(y_1,\,y_2\in \mathbb {C}\), \(\tilde{s}_{1,0},...,\tilde{s}_{1,d_1-1}\in \mathscr {P}(\mathbb {C}^N)\) and \(\tilde{s}_{2,0},...,\tilde{s}_{2,d_2-1}\in \mathscr {P}(\mathbb {C}^{N+1})\). Observe that in this case dim\(\,V=N\). Let

$$\begin{aligned}&V_1=\{(z,y_1)\in \mathbb {C}^{N+1}: s_1(z,y_1)=0\},\\&V=\{(z,y_1,y_2)\in \mathbb {C}^{N+2}: s_1(z,y_1)=0, \ s_2(z,y_1,y_2)=0\},\\&\pi _1: V_1 \ni (z,y_1) \mapsto z\in \mathbb {C}^{N}, \qquad \pi _2: V \ni (z,y_1,y_2) \mapsto (z,y_1)\in V_1. \end{aligned}$$

Assume that \(s_1\) is an irreducible polynomial and \(s_2\) is irreducible on \(V_1\). Fix a compact set \(K\subset \mathbb {C}^N\) and consider

$$\begin{aligned} E_1:=\pi _1^{-1}(K) \subset V_1, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ E:=\pi ^{-1}(E_1) \subset V. \end{aligned}$$

By Theorem 4.4, if K is a Markov set in \(\mathbb {C}^N\) then the set \(E_1\) satisfies the Markov inequality for polynomials from \(\mathscr {P}(z)\otimes \mathscr {P}_{d_1-1}(y_1)\) and the division inequality. From Corollary 5.4 the set E satisfies the Markov inequality for polynomials from \(\mathscr {P}(z)\otimes \mathscr {P}_{d_1-1}(y_1)\otimes \mathscr {P}_{d_2-1}(y_2)\) and the division inequality.

Corollary 5.5

Let \(V=V(s_1,s_2)\subset \mathbb {C}^{N+2}\) be an algebraic set given by polynomials \(s_1\) and \(s_2\) of forms (14) and let \(\pi \) be the projection \(\pi : V \ni (z,y_1,y_2) \mapsto z\in \mathbb {C}^{N}\). Assume that \(s_1\) is an irreducible polynomial and \(s_2\) is irreducible on \(V_1\). If K is a Markov set in \(\mathbb {C}^N\) then \(E=\pi ^{-1}(K)\subset V\) satisfies the division inequality and the Markov inequality for polynomials from \(\textbf{W}=\mathscr {P}(z)\otimes \mathscr {P}_{d_1-1}(y_1)\otimes \mathscr {P}_{d_2-1}(y_2)\), which is a space of representatives of \(\mathscr {P}(V)\).

Finding a Gröbner basis containing polynomials of a convenient form is possible for a wide class of algebraic sets. Let \(V=V(s_1,\ldots , s_n)\subset \mathbb {C}^N\) be an algebraic variety and let L be a linear invertible change of N complex variables; by \(L^*(V)\) we denote the algebraic variety defined by the polynomials \(s_1\circ L, \ldots , s_n\circ L\). By the Noether normalization theorem (see e.g. [15, Th.3.4.1]), for any algebraic set V there exists a linear invertible change of N complex variables L such that the ideal \(I(L^*(V))\) related to \(L^*(V)\) contains polynomials

$$\begin{aligned}&g_j(x_1,\ldots ,x_d,y_1,\ldots , y_{N-d})\nonumber \\&\quad =y_j^{d_j}+\sum _{k=0}^{d_j-1} g_{jk}(x_1,\ldots ,x_d,y_1,\ldots ,y_{j-1})y_j^k \ \ \text{ for } j=1,\ldots ,N-d \end{aligned}$$
(15)

where d is the dimension of the algebraic variety \(L^*(V)\). Denote \(V_{N-d}=V(g_1,\ldots ,g_{N-d})\) and \(G=\{g_1,\ldots ,g_{N-d}\}\). Then \(L^*(V)\subset V_{N-d}\) and the dimension of \(V_{N-d}\) equals d. Moreover, taking the grevlex ordering and using Buchberger’s algorithm we can show that G is a Gröbner basis of \(V_{N-d}\). Indeed, for the leading terms of functions from G we have

$$\begin{aligned} LT(g_j)=y_j^{d_j}=LM(g_j), \ \ \ \ j\in \{1,\ldots ,N-d\} \end{aligned}$$

where LM(g) denotes the leading monomial of g. For any \(j,k\in \{1,\ldots ,N-d\}\), \(j\ne k\), the least common multiple of \(y_j^{d_j}\) and \(y_k^{d_k}\) is \(y_j^{d_j}y_k^{d_k}\), so the leading monomials of \(g_j\) and \(g_k\) are relatively prime. By [11, Prop.4, §9, chap.2], the S-polynomial of \(g_j\) and \(g_k\) reduces to zero modulo G for all \(j\ne k\). Hence, by [11, Th.3, §9, chap.2], G is a Gröbner basis of \(V_{N-d}\).

If \(V_{N-d}\) is an irreducible algebraic set then G is a Gröbner basis of \(L^*(V)\). In this situation we can prove the Markov and division inequalities for compact subsets \(E\subset L^*(V)\) such that \(\Pi (E)\) is a Markov set in \(\mathbb {C}^d\) and \(E=\Pi ^{-1}(\Pi (E))\), where \(\Pi \) is the projection of \(L^*(V)\) onto \(\mathbb {C}^d\).

In [15] we can find some algorithms (see e.g. Algorithm 3.4.5 and Singular Example 3.4.6) to construct a linear change of variables L guaranteed by Noether normalization. An algebraic set of dimension \(0<n<N\) in \(\mathbb {C}^N\) defined by \(N-n\) polynomial equations is usually called a (set-theoretic) complete intersection.

Example 5.6

Take two polynomials

$$\begin{aligned}&q_1(x,y,z)=5x^2+3z^2+y^2+6xz+2yz-4, \\&q_2(x,y,z)=4y^2+4xz+8yz+7z^2-10x-10z+4. \end{aligned}$$

They give an algebraic curve \(V=V(q_1,q_2)\) in \(\mathbb {C}^3\) but they are not of form (14). We can take the linear invertible change of variables

$$\begin{aligned} L(x,y,z)=(-x+z, -2x+y+z,\ 2x-z). \end{aligned}$$

Then \(L^*(V(q_1,q_2))\) is an algebraic set defined by the polynomials:

$$\begin{aligned}&(q_1\circ L)(x,y,z)=x^2+y^2+z^2-4, \\&(q_2\circ L)(x,y,z)=4x^2+4y^2-z^2-10x+4. \end{aligned}$$

Since \((q_2\circ L)(x,y,z)=5(x^2+y^2-2x)-x^2-y^2-z^2+4\), we see that \(L^*(V(q_1,q_2))=V(g_1,g_2)\) where

$$\begin{aligned} g_1(x,y,z)=x^2+y^2+z^2-4, \ \ \ g_2(x,y,z)=x^2+y^2-2x. \end{aligned}$$

Consider the set \(E=\{(x,y,z)\in {V(g_1,g_2)}:\ x\in [0,2]\}\) and observe that it is an entirely real curve called Viviani’s window. The set \(\{z^2+2x-4,\, y^2+x^2-2x\}\) is the reduced Gröbner basis of form (14) for the reverse lexicographical ordering in the family of monomials \(\mathbb {T}^3\). By Corollary 5.5, the set E satisfies the Markov inequality for polynomials from \(\mathscr {P}(x)\otimes \mathscr {P}_{1}(y)\otimes \mathscr {P}_{1}(z)\). Moreover, the set E satisfies the division inequality.
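The reduced Gröbner basis claimed in this example can be reproduced in a computer algebra system; a short sketch in Python (sympy), where ranking the dependent variables y, z above the free variable x plays the role of the ordering used in the text:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Viviani's window: intersection of a sphere and a cylinder (Example 5.6)
g1 = x**2 + y**2 + z**2 - 4      # sphere of radius 2
g2 = x**2 + y**2 - 2*x           # cylinder

# grevlex with y, z ranked above x yields a basis of form (14):
# each element is monic in one of the dependent variables y, z.
G = sp.groebner([g1, g2], y, z, x, order='grevlex')
print(G.exprs)   # the basis {y**2 + x**2 - 2*x, z**2 + 2*x - 4}
```

The leading monomials \(y^2\) and \(z^2\) are relatively prime, so by Buchberger's criterion this pair is indeed a Gröbner basis, in accordance with the general discussion above.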

Example 5.7

Consider

$$\begin{aligned} V=V(yz+1,x^3-xz^2-y^2)\subset \mathbb {C}^3 \ \ \ \ \ \textrm{and} \ \ \ \ \ E=\{(x,y,z)\in V\,: \, |z|=1\}. \end{aligned}$$

The form of polynomials determining V does not admit a direct use of Corollary 5.5. However, we can write the set E in the form

$$\begin{aligned} E&=\{(x,y,z)\in \mathbb {C}^3\,: \, \tfrac{1}{4}(y+z)^2 -\tfrac{1}{4}(z-y)^2+1=0, \ x^3-xz^2-y^2=0, \ |z|=1\} \\&=\{(x,y,z)\in \mathbb {C}^3\,: \, \tfrac{1}{4}(y+z)^2 -\tfrac{1}{4}(z-y)^2+1=0, \ x^3-xz^2-y^2=0, \\&\qquad \tfrac{1}{2}(z-y)\in [-1,1]\}, \end{aligned}$$

because \(\frac{1}{2}(z-y)=\frac{1}{2}(z+\frac{1}{z})\) on V and for the Joukowski transform \(\varphi (z)=\frac{1}{2}(z+\frac{1}{z})\) we have \(|z|=1\) if and only if \(\varphi (z)\in [-1,1]\). Therefore,

$$\begin{aligned}&E=\{(x,u,w)\in \mathbb {C}^3\,: \, u^2-w^2+1=0,\\&x^3-x(u+w)^2-(u-w)^2=0, \ w\in [-1,1]\} \end{aligned}$$

where \(u=\frac{1}{2}(y+z)\), \(w=\frac{1}{2}(z-y)\). By Corollary 5.5, the set E satisfies the Markov inequality for polynomials from \(\mathscr {P}_2(x)\otimes \mathscr {P}_1(u)\otimes \mathscr {P}(w)\) and the division inequality. Since the Markov inequality is invariant under a linear change of variables (i.e. the constant m is invariant), the set E satisfies the Markov inequality for polynomials from \(\mathscr {P}_2(x)\otimes \mathscr {P}_1(y+z)\otimes \mathscr {P}(z-y)\), which is a space of representatives of \(\mathscr {P}(V)\).
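The Joukowski transform step used in this example is easy to sanity-check numerically; a minimal sketch (Python):

```python
import cmath

# Joukowski transform: phi maps the unit circle onto the segment [-1, 1],
# since phi(e^{i*theta}) = cos(theta).
phi = lambda t: 0.5 * (t + 1 / t)

for k in range(16):
    w = phi(cmath.exp(2j * cmath.pi * k / 16))   # sample points with |z| = 1
    assert abs(w.imag) < 1e-12 and -1 - 1e-12 <= w.real <= 1 + 1e-12
print("|z| = 1 is mapped into [-1, 1]")
```

This is exactly the equivalence \(|z|=1 \Leftrightarrow \varphi (z)\in [-1,1]\) invoked above to rewrite the constraint on E.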