## 1 Introduction

Let us start by recalling the Poincaré–Miranda theorem. We consider a rectangle $${\mathcal {R}}$$ in $$\mathbb {R}^N$$,

\begin{aligned} {\mathcal {R}}=[a_1,b_1]\times \dots \times [a_N,b_N], \end{aligned}

and a continuous function $$f:{\mathcal {R}}\rightarrow \mathbb {R}^N$$. For every $$x\in {\mathcal {R}}$$, we write

\begin{aligned} f(x)=(f_1(x),\ldots ,f_N(x)), \end{aligned}

thus defining the components $$f_k:\mathcal{R}\rightarrow \mathbb {R}$$, with $$k=1,\ldots ,N$$. The theorem states that, if each component $$f_k$$ changes sign on the corresponding opposite faces

\begin{aligned} {\mathcal {F}}_k^-=\{x\in {\mathcal {R}}:x_k=a_k\},\quad {\mathcal {F}}_k^+=\{x\in {\mathcal {R}}:x_k=b_k\}, \end{aligned}

then there is a point in $${\mathcal {R}}$$ where all the components of f are equal to zero.

### Theorem 1

(Poincaré–Miranda) Assume that, for $$k=1,\ldots ,N$$, either

\begin{aligned} f_k(x)\left\{ \begin{array}{ll} \le 0,&{}\quad \mathrm{for}\,\mathrm{every}\, x\in {\mathcal {F}}_k^-,\\ \ge 0,&{}\quad \mathrm{for}\,\mathrm{every}\,x\in {\mathcal {F}}_k^+, \end{array} \right. \end{aligned}
(1a)

or

\begin{aligned} f_k(x)\left\{ \begin{array}{ll} \ge 0,&{}\quad \mathrm{for}\,\mathrm{every}\,x\in {\mathcal {F}}_k^-,\\ \le 0,&{}\quad \mathrm{for}\,\mathrm{every}\, x\in {\mathcal {F}}_k^+. \end{array} \right. \end{aligned}
(1b)

Then, there is an $${\bar{x}}\in {\mathcal {R}}$$ such that $$f({\bar{x}})=0$$.
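As a purely illustrative aside (ours, not part of the original argument), the theorem is easy to test numerically. The following Python sketch checks the sign condition (1a) on the rectangle $$[-1,1]\times [-1,1]$$ for an ad hoc field f, and then locates the zero whose existence the theorem guarantees; here the zero can be found by fixed-point iteration, since this particular f happens to be a small perturbation of the identity.

```python
import math

# A concrete instance of the Poincaré–Miranda setup on R = [-1,1] x [-1,1].
# (Ad hoc illustrative field, not taken from the paper.)
def f(x, y):
    return (x + 0.25 * math.sin(y), y - 0.25 * math.cos(x))

# Check the sign condition (1a) on the two pairs of opposite faces by sampling.
samples = [-1 + k / 25 for k in range(51)]
assert all(f(-1, t)[0] <= 0 and f(1, t)[0] >= 0 for t in samples)  # f_1 on F_1^-, F_1^+
assert all(f(t, -1)[1] <= 0 and f(t, 1)[1] >= 0 for t in samples)  # f_2 on F_2^-, F_2^+

# The theorem guarantees a zero in R.  For this field, f(x,y) = 0 reads
# x = -0.25 sin(y), y = 0.25 cos(x), a contraction on R, so fixed-point
# iteration converges to the zero.
x, y = 0.0, 0.0
for _ in range(60):
    x, y = -0.25 * math.sin(y), 0.25 * math.cos(x)

fx, fy = f(x, y)
assert max(abs(fx), abs(fy)) < 1e-10
print(f"zero at ({x:.6f}, {y:.6f})")
```

Of course the theorem itself gives no constructive procedure; the contraction argument works only for this hand-picked example.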

As recalled in the nice survey paper [20] (see also [19]), this theorem was stated in 1883 by Jules Henri Poincaré [25], with just a hint of proof, and then forgotten for a long time. In 1940, Silvio Cinquini [6] rediscovered its statement, but his proof was not complete, and it was one year later that Carlo Miranda [22] proved the equivalence of Theorem 1 with Brouwer’s fixed point theorem.

Poincaré was mainly motivated by the study of periodic solutions for differential equations arising from the three-body problem, which can be obtained as fixed points of what is now called the Poincaré map. After his pioneering work, many other researchers found applications of Theorem 1, and different generalizations have been proposed, see e.g. [5, 16, 21, 23, 24, 28, 29, 31–34].

The aim of this paper is to generalize Theorem 1 by replacing the sign condition on the components of the vector field with an avoiding cones condition. We will also consider different shapes for the domain of the function f, instead of just rectangles, leading to situations where the topological degree can be different from $$\pm 1$$. The starting point to obtain these results will be a variant of the Poincaré–Bohl theorem.

When the domain D of the function f is a convex body and contains the origin in its interior, our modification of the Poincaré–Bohl theorem consists in replacing the usual assumption, requiring the vector field to avoid the rays arising from the origin, by asking instead that it avoids the normal cones. Clearly enough, when the boundary of the set is smooth, this means avoiding the rays determined by the normal vector field: in this case, such a condition has been tackled in [13]. The modified Poincaré–Bohl theorem will be introduced in Sect. 2, while a first generalization of Theorem 1 will be given in Sect. 3.

We then propose in Sect. 4 a different viewpoint, interpreting the domain D as a truncated convex body. This leads us to the problem of reconstructing a larger convex body E to which our function f can be extended. The set E is obtained by glueing D with some convex sets $$C_1,\ldots ,C_M$$, where M denotes the number of truncations. In Sect. 5, we argue about the optimal choice of the reconstruction E, which minimizes the amplitude of the cones to be avoided by the vector field f.

In Sect. 6, we state our main result for truncated convex bodies. Here, the avoiding cones condition is introduced in its most general form, assuming that the field f avoids the inner normal cones in the regions of the boundary of D where the truncation occurs, and the outer normal cones on the remaining part of the boundary. In this situation, we show that the topological degree of f on D is equal to $$\pm (1-M)$$. This will be proved in Sect. 8, by suitably extending the function f to the larger set $$E=D\cup C_1\cup \dots \cup C_M$$, and using the additivity of the degree. Then we show how the convexity assumption on the set D can be weakened, by only requiring the set D to be diffeomorphic to a convex body.

The structure of the Poincaré–Miranda theorem is typical of the gradient of a potential V in a neighbourhood of a non-degenerate saddle point, where the domain is split into two subspaces, in such a way that the vector field is expansive on one subspace and contractive on the other one [20, 23, 24]. More generally, degenerate saddle points can also be characterized by their associated expansive and contractive directions.

In Sect. 7, we provide an illustrative example showing how our results apply to such situations, considering the function $$f=\nabla V$$ in a neighbourhood of a degenerate multi-saddle point. Indeed, we can deal with some more general situations, and show that the degree of $$\nabla V$$ is equal to $$\pm (1-M)$$, where M is the number of connected components of the set of boundary points where $$\nabla V$$ points outwards. This type of result has already been treated by many authors (see e.g. [9, 27, 30] and the references therein).

## 2 A variant of the Poincaré–Bohl theorem

We begin by discussing the well-known Poincaré–Bohl theorem, which we recall, together with its proof, for the reader’s convenience.

### Theorem 2

(Poincaré–Bohl) Assume that $$\varOmega$$ is an open bounded subset of $$\mathbb {R}^N$$, with $$0\in \varOmega$$, and that $$f:\overline{\varOmega }\rightarrow \mathbb {R}^N$$ is a continuous function such that

\begin{aligned} f(x)\not \in \{\alpha x:\alpha >0\},\quad \mathrm{for}\, \mathrm{every}\,x\in \partial \varOmega . \end{aligned}
(2)

Then, there is an $${\bar{x}}\in \overline{\varOmega }$$ such that $$f({\bar{x}})=0$$.

### Proof

We may assume that $$f(x)\ne 0$$ for every $$x\in \partial \varOmega$$, since otherwise there is nothing more to prove. Consider the homotopy $$F:\overline{\varOmega }\times [0,1]\rightarrow \mathbb {R}^N$$ defined by

\begin{aligned} F(x,\lambda )=(1-\lambda )f(x)-\lambda x. \end{aligned}

We show that $$0\notin F(\partial \varOmega \times [0,1])$$. By contradiction, assume that there are an $$x\in \partial \varOmega$$ and a $$\lambda \in [0,1]$$ such that $$F(x,\lambda )=0$$. Then $$\lambda \ne 0$$, since by the above assumption $$f(x)\ne 0$$, and $$\lambda \ne 1$$, since $$0\in \varOmega$$. So, $$\lambda \in \,]0,1[$$ and, setting $$\alpha =\lambda /(1-\lambda )$$, we see that $$\alpha >0$$, and $$f(x)=\alpha x$$, a contradiction.

Therefore, we can compute the Brouwer topological degree:

\begin{aligned} d_B(f,\varOmega )=d_B(F(\cdot ,1),\varOmega )=d_B(F(\cdot ,0),\varOmega )=d_B(-I,\varOmega )=(-1)^N. \end{aligned}

(Here and in the following, I denotes the identity function.) The conclusion readily follows. $$\square$$
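As an illustrative aside (ours, not from the paper), for $$N=2$$ the Brouwer degree appearing in the proof can be confirmed numerically as a winding number. The sample field below is an ad hoc linear choice satisfying condition (2), since $$\left\langle f(p),p\right\rangle <0$$ on the unit circle.

```python
import math

# Illustrative check (ours) of Theorem 2 for N = 2, where the Brouwer
# degree can be computed as a winding number.  Sample field f(p) = Ap
# with <f(p), p> = -|p|^2 < 0 on the unit circle, so f(p) is never a
# positive multiple of p and condition (2) holds.
def f(x, y):
    return (-x + 0.3 * y, -0.3 * x - y)

# Condition (2) on the boundary: f(p) is not in {alpha p : alpha > 0}.
for k in range(360):
    t = 2 * math.pi * k / 360
    px, py = math.cos(t), math.sin(t)
    fx, fy = f(px, py)
    parallel = abs(px * fy - py * fx) < 1e-12
    assert not (parallel and px * fx + py * fy > 0)

# Winding number of f along the counterclockwise unit circle.
n = 2000
winding = 0.0
fx, fy = f(1.0, 0.0)
prev = math.atan2(fy, fx)
for k in range(1, n + 1):
    t = 2 * math.pi * k / n
    fx, fy = f(math.cos(t), math.sin(t))
    ang = math.atan2(fy, fx)
    d = ang - prev
    while d <= -math.pi:
        d += 2 * math.pi
    while d > math.pi:
        d -= 2 * math.pi
    winding += d
    prev = ang
deg = round(winding / (2 * math.pi))
assert deg == 1  # agrees with d_B(f, Omega) = (-1)^N = 1 for N = 2
```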

We now state a variant of Theorem 2, in the case when the set $$D=\overline{\varOmega }$$ is a convex body, i.e. a compact convex set in $$\mathbb {R}^N$$ with non-empty interior. Note that every convex body coincides with the closure of its interior.

Given a point $${\bar{x}}\in D$$, we define the normal cone to D in $${\bar{x}}$$ as

\begin{aligned} {\mathcal {N}}_D({\bar{x}})=\left\{ v\in \mathbb {R}^N: \left\langle v,x-{\bar{x}}\right\rangle \le 0, \text { for every }x\in D\right\} , \end{aligned}
(3)

where as usual, $$\left\langle \cdot \,,\cdot \right\rangle$$ denotes the Euclidean scalar product in $$\mathbb {R}^N$$, with associated norm $$\left||\cdot \right||$$. Trivially, $${\mathcal {N}}_D(x)=\{0\}$$ for every $$x\in {{\mathrm{int}}}D$$. On the other hand, it can be shown that, if $$x\in \partial D$$, then its normal cone contains at least a halfline. For $$x\in \partial D$$, we write $${\mathcal {N}}^0_D(x)$$ to denote $${\mathcal {N}}_D(x)$$ deprived of the origin, i.e. $${\mathcal {N}}^0_D(x)={\mathcal {N}}_D(x){\setminus } \{0\}$$. Clearly, if the boundary is smooth at x, then $${\mathcal {N}}^0_D(x)=\{\alpha \nu (x):\alpha >0\}$$, where $$\nu :\partial D\rightarrow \mathbb {R}^N$$ denotes the unit outer normal vector field.

We denote by $$\pi _D:\mathbb {R}^N \rightarrow D$$ the projection onto the convex set D: for every $${\bar{x}}\in \mathbb {R}^N$$, $$\pi _D({\bar{x}})$$ is the unique element of D satisfying

\begin{aligned} {{\mathrm{dist}}}({\bar{x}}, \pi _D({\bar{x}}))\le {{\mathrm{dist}}}({\bar{x}}, x), \quad \text {for every} \ x\in D. \end{aligned}

We remark that $$\pi _D$$ is a continuous function, and

\begin{aligned} \pi _D({\bar{x}}+v)={\bar{x}}\quad \Leftrightarrow \quad v\in {\mathcal {N}}_D({\bar{x}}). \end{aligned}
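For rectangles, the projection $$\pi _D$$ and the equivalence above are easy to verify numerically. The following Python sketch (ours; the box D is an arbitrary choice) implements $$\pi _D$$ by coordinatewise clamping and tests the characterization of $${\mathcal {N}}_D({\bar{x}})$$, including at a corner, where the normal cone is a full two-dimensional quadrant.

```python
# Numerical sketch (ours): projection onto a rectangle
# D = [-1,1] x [-2,3] is coordinatewise clamping, and
# pi_D(xbar + v) = xbar holds exactly when v lies in N_D(xbar).
D = [(-1.0, 1.0), (-2.0, 3.0)]

def proj(p):
    # pi_D: clamp each coordinate to its interval
    return tuple(min(max(pk, lo), hi) for pk, (lo, hi) in zip(p, D))

def in_normal_cone(v, xbar):
    # For a box, N_D(xbar) is a product of half-lines and {0}:
    # (-inf,0] where x_k = a_k, [0,+inf) where x_k = b_k, {0} in between.
    for vk, xk, (lo, hi) in zip(v, xbar, D):
        if lo < xk < hi and vk != 0.0:
            return False
        if xk == lo and vk > 0.0:
            return False
        if xk == hi and vk < 0.0:
            return False
    return True

# At a corner the normal cone is a full quadrant.
corner = (1.0, 3.0)
for v in [(0.5, 2.0), (0.0, 1.0), (2.0, 0.0)]:
    assert in_normal_cone(v, corner)
    assert proj((corner[0] + v[0], corner[1] + v[1])) == corner

# At an interior point N_D = {0}: any nonzero shift moves the projection.
assert proj((0.2, 0.4)) == (0.2, 0.4)
assert proj((0.7, 0.4)) != (0.2, 0.4)
```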

In the following, when dealing with a continuous function $$f:D\rightarrow \mathbb {R}^N$$, defined on a convex body D, with $$0\notin f(\partial D)$$, we denote by $$\deg (f,D)$$ the Brouwer degree $$d_B(f,{{\mathrm{int}}}D)$$.

### Theorem 3

Assume that $$D\subseteq \mathbb {R}^N$$ is a convex body and that $$f:D\rightarrow \mathbb {R}^N$$ is a continuous function such that

\begin{aligned} f(x)\notin {\mathcal {N}}^0_D(x),\quad \mathrm{for}\,\mathrm{every}\,x\in \partial D. \end{aligned}
(4)

Then, there is an $${\bar{x}}\in D$$ such that $$f({\bar{x}})=0$$. Furthermore, if $$0\notin f(\partial D)$$, then

\begin{aligned} \deg (f,D)=(-1)^N. \end{aligned}
(5)

### Proof

We first notice that it is not restrictive to assume that $$0\in {{\mathrm{int}}}D$$. Moreover, we may assume that $$f(x)\ne 0$$ for every $$x\in \partial D$$, since otherwise there is nothing more to prove.

Let B be an open ball, centred at the origin and such that $$D\subseteq B$$. We consider the homotopy $$F:\overline{B} \times [0,1]\rightarrow \mathbb {R}^N$$ defined by

\begin{aligned} F(x,\lambda )={\left\{ \begin{array}{ll} 2\lambda (\pi _D(x)-x)+(1-2\lambda ) f(\pi _D(x)), &{}\text {for }0\le \lambda \le \frac{1}{2},\\ 2 (1-\lambda ) \pi _D(x) - x, &{}\text {for } \frac{1}{2}\le \lambda \le 1 . \end{array}\right. } \end{aligned}

We check that $$0\notin F(\partial B\times [0,1])$$. For $$\lambda \in [0,1/2]$$, we observe that, for every $$x\in \partial B$$, $$\pi _D(x)-x\in -{\mathcal {N}}^0_D(\pi _D(x))$$ but at the same time $$f(\pi _D(x))\notin {\mathcal {N}}_D (\pi _D(x))$$. For $$\lambda \in [1/2,1]$$, we notice that, by construction, $$\left||x\right||>\left||\pi _D(x)\right||$$ for every $$x\in \partial B$$. Thus, by the properties of the topological degree,

\begin{aligned} (-1)^N=d_B(-I, B)=d_B(f \circ \pi _D,B)=\deg (f,D), \end{aligned}

where in the last equality we used the excision property, since $$f(\pi _D(x))\ne 0$$ for every $$x\in \overline{B}{\setminus } {{\mathrm{int}}}D$$. $$\square$$

Notice that, when D is a ball centred at the origin, then $$\nu (x)=x/\left||x\right||$$ for every $$x\in \partial D$$, so that conditions (2) and (4) are equivalent. The differences between these two conditions in a general case are illustrated in Fig. 1.

We remark that the notion of normal cone allows us to extend the idea of inward and outward direction to more sophisticated situations. For generalizations of Theorem 3 in this sense, we refer to [4, 8, 18].

Clearly, the avoiding outer cones condition (4) can be replaced by an avoiding inner cones condition, by just changing the sign of the function f. In this case, the degree in (5) becomes $$\deg (f,D)=1$$. However, in analogy with similar results in the literature, in our exposition, we prefer dealing with outer cones, which also have the advantage of allowing a more intuitive visualization.

## 3 From Poincaré–Bohl to Poincaré–Miranda: a first generalization

Assumption (1) in the Poincaré–Miranda theorem geometrically means that either on both faces $${\mathcal {F}}_k^-$$ and $${\mathcal {F}}_k^+$$ the vector field points outwards, or on both faces it points inwards, for every $$k=1,\ldots ,N$$. We will try to replace this assumption by an avoiding inner/outer cones condition.

We first show how to prove the Poincaré–Miranda theorem by the use of the Poincaré–Bohl theorem (i.e. by Theorem 2). This will help us towards some generalizations.

### Proof of Theorem 1

We first notice that there is no loss of generality in assuming that 0 is in the interior of $${\mathcal {R}}$$. Furthermore, we can define a new function $$\tilde{f}:\mathcal{R}\rightarrow \mathbb {R}^N$$ whose components are $${\tilde{f}}_k=\pm f_k$$, in such a way that

\begin{aligned} {\tilde{f}}_k(x_1,\ldots ,a_k,\ldots ,x_N)\ge 0\ge {\tilde{f}}_k(x_1,\ldots ,b_k,\ldots ,x_N), \end{aligned}

for every $$(x_1,\ldots ,x_N)\in {\mathcal {R}}$$ and every $$k=1,\ldots ,N$$. Notice that

\begin{aligned} f(x)=0\quad \Leftrightarrow \quad {\tilde{f}}(x)=0. \end{aligned}

It is easily verified that, for $${D}={\mathcal {R}}$$, the assumptions of Theorem 2 are satisfied, and the proof is completed. $$\square$$

The above proof shows that, by composing f with an appropriate linear function $$\eta :\mathbb {R}^N\rightarrow \mathbb {R}^N$$, whose associated matrix is diagonal with diagonal entries equal to 1 or $$-1$$, the transformed function $${\tilde{f}}=\eta \circ f$$ satisfies the assumptions of Theorem 2, and the conclusion immediately follows. With this idea in mind, we now provide a generalization of Theorem 3.

### Theorem 4

Let $$h:\mathbb {R}^N\rightarrow \mathbb {R}^N$$ be a homeomorphism, such that $$h(0)=0$$, and assume that $$D\subseteq \mathbb {R}^N$$ is a convex body. Let $$f:D\rightarrow \mathbb {R}^N$$ be a continuous function such that

\begin{aligned} f(x)\notin h({\mathcal {N}}^0_D(x)),\quad \mathrm{for}\, \mathrm{every}\,x\in \partial D. \end{aligned}

Then, there is an $${\bar{x}}\in D$$ such that $$f({\bar{x}})=0$$.

### Proof

Define $$g :D\rightarrow \mathbb {R}^N$$ as $$g=h^{-1}\circ f$$. Then $$g(x)\notin {\mathcal {N}}^0_D(x)$$, for every $$x\in \partial D$$, and Theorem 3 provides the existence of an $${\bar{x}}\in D$$ such that $$g({\bar{x}})=0$$. Since h is invertible, we have that $$f({\bar{x}})=0$$, as well, thus concluding the proof. $$\square$$

As a simple and direct consequence of Theorem 4, let $$N_1$$ and $$N_2$$ be positive integers such that $$N_1+N_2=N$$, and $$K_1\subseteq \mathbb {R}^{N_1}$$, $$K_2\subseteq \mathbb {R}^{N_2}$$ be two convex bodies. Define, for every $$x=(x_1,x_2)\in K_1\times K_2$$,

\begin{aligned} {\mathcal {A}}_{c}(x)=\left\{ \begin{array}{ll} {\mathcal {N}}_{K_1}(x_1) \times \{0\},&{}\quad \hbox {if } x \in \partial K_1 \times {{\mathrm{int}}}\, K_2,\\ \{0\}\times (-{\mathcal {N}}_{K_2}(x_2)),&{}\quad \hbox {if } x \in {{\mathrm{int}}}\, K_1 \times \partial K_2,\\ {\mathcal {N}}_{K_1}(x_1)\times (-{\mathcal {N}}_{K_2}(x_2)),&{}\quad \hbox {if } x \in \partial K_1 \times \partial K_2, \end{array} \right. \end{aligned}

and let $${\mathcal {A}}_{c}^0(x)={\mathcal {A}}_{c}(x){\setminus }\{0\}$$.

### Corollary 5

Let $$f:K_1\times K_2\rightarrow \mathbb {R}^N$$ be a continuous function such that

\begin{aligned} f(x)\notin {\mathcal {A}}_{c}^0(x),\quad \mathrm{for}\, \mathrm{every}\,x\in \partial (K_1\times K_2). \end{aligned}
(6)

Then, there is an $${\bar{x}}=({\bar{x}}_1,{\bar{x}}_2)\in K_1\times K_2$$ for which $$f({\bar{x}})=0$$.

### Proof

It is a straightforward application of Theorem 4, with $$D=K_1\times K_2$$, taking as h the linear transformation defined as the identity $$I_{N_1}$$ on $$\mathbb {R}^{N_1}$$ and its opposite $$-I_{N_2}$$ on $$\mathbb {R}^{N_2}$$. $$\square$$

We will refer to the condition (6) as the avoiding cones condition. To compare it with the “classical” condition (1) in the Poincaré–Miranda theorem, we consider the following example.

### Example 6

Let $$K_1=[a_1,b_1]$$ and $$K_2=[a_2,b_2]$$, so that $$D={\mathcal {R}}$$ is a rectangle in $$\mathbb {R}^2$$. We write $$f(x)=(f_1(x),f_2(x))$$, and, for simplicity, we assume that $$0\notin f(\partial D)$$. Let us denote by $$x_1$$ a generic point in $$]a_1,b_1[$$ and by $$x_2$$ a generic point in $$]a_2,b_2[$$. The comparison between the directions prohibited by Theorem 1 and those prohibited by Corollary 5 is illustrated in Fig. 2 and summarized in the following table:

The same behaviour is also observed in higher dimensions. For a general point $$x\in \partial \mathcal {R}$$ lying on an $$(N-M)$$-dimensional facet of the rectangle, the Poincaré–Miranda theorem requires M inequalities, each on a different component of f(x). For the same point x, our avoiding cones condition requires much less: only when all the other $$N-M$$ components of f(x) vanish must at least one of those M inequalities be satisfied. This shows that our Corollary 5 also generalizes [16, Theorem 3.4].
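To make the comparison concrete, the following sketch (ours, with ad hoc boundary points of the unit square) tests membership in $${\mathcal {A}}_{c}^0(x)$$: on a face, the cones condition only excludes vectors pointing along the outer normal with all remaining components null.

```python
# Sketch (ours): membership test for the avoiding cones condition (6)
# on the square K1 x K2 = [-1,1] x [-1,1].
def side(t):
    # -1 if t is at the left/bottom endpoint, +1 at the right/top, 0 inside
    return (t == 1) - (t == -1)

def in_Ac0(fv, x):
    # A_c(x) = N_{K1}(x1) x (-N_{K2}(x2)); test fv in A_c(x) minus {0}
    s1, s2 = side(x[0]), side(x[1])
    ok1 = fv[0] == 0 if s1 == 0 else fv[0] * s1 >= 0
    ok2 = fv[1] == 0 if s2 == 0 else fv[1] * s2 <= 0
    return ok1 and ok2 and fv != (0.0, 0.0)

x = (1, 0.5)                       # interior point of the face F_1^+
# Poincaré–Miranda (after normalization) constrains the sign of f_1 on the
# whole face; condition (6) only excludes f(x) = (alpha, 0) with alpha > 0.
assert in_Ac0((2.0, 0.0), x)       # prohibited by (6)
assert not in_Ac0((-2.0, 0.3), x)  # allowed
assert not in_Ac0((2.0, 0.3), x)   # allowed as soon as f_2 != 0
```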

Similar considerations also apply to other variants of the Poincaré–Miranda theorem for sets D which are product of balls instead of intervals, as for instance in [21, Corollary 2]. For one of these situations, namely the cylinder, the avoiding cones condition is illustrated in Fig. 5.

There is a second way to describe the difference between the avoiding cones condition (6) and assumption (1) in the Poincaré–Miranda theorem. Whereas the avoiding cones condition requires that f(x) does not lie in $${\mathcal {A}}_{c}(x)$$, the Poincaré–Miranda theorem requires that f(x) actually lies in the polar cone of $${\mathcal {A}}_{c}(x)$$, defined as

\begin{aligned} \left\{ v\in \mathbb {R}^N: \left\langle v,w\right\rangle \le 0, \text { for every }w\in {\mathcal {A}}_{c}(x)\right\} , \end{aligned}

besides possibly excluding the trivial case $$f(x)=0$$.

## 4 Truncated convex bodies

The Poincaré–Miranda theorem and many of its generalizations consider a rectangular domain, or at least the product of convex sets. We now want to replace this structural assumption by introducing a new class of sets, which will lead us to some topologically different situations.

Given a convex body D and a set $$F\subseteq \partial D$$, we say that D is truncated in F if there exists a convex body E and a hyperplane H with the following properties (see Fig. 3):

• $$F=D \cap H$$, and H is a supporting hyperplane for the set D;

• $$D=E\cap {\mathcal {H}}_D$$, where $${\mathcal {H}}_D$$ is the closed halfspace bounded by H that includes D;

• the set $$C:=\overline{E{\setminus } {\mathcal {H}}_D}$$ has non-empty interior.

We call E a reconstruction of D with respect to F. Notice that $$C=\overline{E{\setminus } D}$$ is a convex body, which is truncated in F, as well.

As possible examples of truncated convex bodies, we have rectangles, polytopes and cylinders. Balls, on the contrary, are not truncated. Nor, in general, is having an $$(N-1)$$-dimensional face sufficient for being truncated: just consider a square with smoothed corners.

In order to investigate the properties of a possible face F, let us denote by $$\partial ^{N-1} F$$ the boundary of F considered as a subset of H. Moreover, along with normal cones, it is useful to consider also the set-valued analogue of the unit normal vector $$\nu$$: it is the map $${{\varvec{\nu }}}_D$$, from $$\partial D$$ to $${\mathcal {S}}^{N-1}=\{y\in \mathbb {R}^N: \left||y\right||=1\}$$, defined as

\begin{aligned} {{\varvec{\nu }}}_D (x)=\left\{ \frac{y}{\left||y\right||}: y\in {\mathcal {N}}^0_D(x) \right\} . \end{aligned}

Denoting by $${{\mathrm{cone}}}[A]$$ the cone generated by a set A, we then have that

\begin{aligned} {\mathcal {N}}_D(x)={{\mathrm{cone}}}\left[ {{\varvec{\nu }}}_D (x)\right] . \end{aligned}

Since the normal cone $${\mathcal {N}}_D$$ is a map from D to the set of closed, convex subsets of $$\mathbb {R}^N$$, with closed graph, we see that $${{\varvec{\nu }}}_D$$ is an upper semicontinuous map from $$\partial D$$ to the set of compact subsets of $$\mathbb {R}^N$$ (for an introduction to set-valued maps, we refer to [3]).

We remark that our definition (3) of the normal cone for convex sets is equivalent to setting

\begin{aligned} {\mathcal {N}}_{D}({\bar{x}})=\left\{ v\in \mathbb {R}^N: \left\langle v,x-{\bar{x}}\right\rangle \le o(\left||x-{\bar{x}}\right||), \text { for every }x\in D\right\} , \end{aligned}
(7)

thus underlining the local nature of the normal cone. This definition is usually adopted to extend the notion of normal cones to non-convex sets (see e.g. [26]).

Given a point $${\bar{x}}\in \partial D$$, to every vector $$v\in {\mathcal {N}}^0_D({\bar{x}})$$, we can associate the supporting hyperplane containing $${\bar{x}}$$,

\begin{aligned} H_v=\{{\bar{x}}+w :\left\langle v,w\right\rangle =0 \}, \end{aligned}

and the corresponding halfspace containing D:

\begin{aligned} {\mathcal {H}}_{v}=\{x\in \mathbb {R}^N :\left\langle v,x-{\bar{x}}\right\rangle \le 0\}. \end{aligned}

Since D is a convex body, it coincides with the intersection of its supporting halfspaces [14, Prop. 2, p. 58].

### Proposition 7

If D is a convex body, truncated in F, then

1. (i)

F is closed, convex, and $$F=E\cap H$$;

2. (ii)

F has a non-empty interior if considered as a subset of H;

3. (iii)

if $$x\in \partial ^{N-1} F$$, then $${{\varvec{\nu }}}_D(x)$$ is multivalued.

### Proof

The proof of (i) is immediate, so we start with the proof of (ii). Since D and C are convex bodies, we can find two open balls $$B_D={\mathcal {B}}^N(p_D,\varepsilon )\subseteq D$$ and $$B_C={\mathcal {B}}^N(p_C,\varepsilon )\subseteq C$$ with the same sufficiently small radius $$\varepsilon$$. We observe that H separates $$B_D$$ and $$B_C$$, and so there is a unique point $$p_F$$ in $$H\cap [p_D,p_C]$$, the intersection of H with the segment joining $$p_D$$ and $$p_C$$. We have

\begin{aligned} B^{N-1}_H(p_F,\varepsilon )={\mathcal {B}}^N(p_F,\varepsilon )\cap H \subseteq E \cap H \subseteq F, \end{aligned}

thus showing that F has non-empty interior as a subset of H.

Regarding (iii), it suffices to show that, if $$x\in \partial ^{N-1}F$$, then there exist two different supporting hyperplanes for D passing through x, which are associated with different unit outer normal vectors. Since $$x\in \partial E$$, there exists a supporting hyperplane $$\widetilde{H}$$ for E, with $$x\in \widetilde{H}$$; as $$D\subseteq E$$, it is a supporting hyperplane for D, as well. On the other hand, we know that H is a supporting hyperplane for D, too, with $$x\in H$$. If it were $$H=\widetilde{H}$$, then E would be contained in the closed halfspace $${\mathcal {H}}_D$$ bounded by H that includes D, so that the set $$C=\overline{E{\setminus } {\mathcal {H}}_D}$$ would have empty interior, a contradiction. Hence $$H\ne \widetilde{H}$$, thus completing the proof. $$\square$$

An immediate consequence is that smooth convex bodies are not truncated. We can interpret (iii) as the necessity for D to have “edges” on the boundary of F.

We now consider multiple truncations. Given a convex body $$D\subseteq \mathbb {R}^N$$ and a family $$\{F_1,\ldots , F_M\}$$ of pairwise disjoint sets, we say that D is truncated in $$\{F_1,\ldots , F_M\}$$ if there exists a convex body E and some hyperplanes $$H_1,\ldots ,H_M$$ with the following properties:

• $$F_i=D \cap H_i$$, for every i, and $$H_i$$ is a supporting hyperplane for the set D;

• $$D=E\cap {\mathcal {H}}_D^1\cap \dots \cap {\mathcal {H}}_D^M$$, where $${\mathcal {H}}_D^i$$ is the closed halfspace bounded by $$H_i$$ that includes D;

• for every i, the set $$C_i:=\overline{E{\setminus } {\mathcal {H}}_D^i}$$ has non-empty interior.

We call E a reconstruction of D with respect to $$\{F_1,\ldots , F_M\}$$. Notice that each $$C_i$$ is a convex body, which is truncated in $$F_i$$. Moreover, the sets $$C_i$$ are pairwise disjoint, and one has

\begin{aligned} C_1\cup \dots \cup C_M=\overline{E{\setminus } D}. \end{aligned}

### Example 8

(Polygons and polyhedra) In $$\mathbb {R}^2$$, a polygon with faces $$F_j$$ is truncated in $$\{F_1,\ldots , F_M\}$$ if no two of the faces $$F_1,\ldots , F_M$$ are adjacent. The simplest way to construct a convex body truncated in M faces is to consider the 2M-agon as truncated on alternate faces.

For polyhedra in $$\mathbb {R}^3$$, we need the faces where the truncations occur not to share any vertices. Thus, the cube can be truncated in at most two (opposite) faces, and so can the octahedron, while the icosahedron can be truncated in at most four faces. One way to construct polyhedra truncated in M faces is to consider the prism with a 2M-agonal base as truncated on alternate lateral faces.
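The planar adjacency constraint of this example is easy to verify combinatorially. The sketch below (ours) indexes the faces of an n-gon cyclically and checks that the alternate faces of the 2M-agon are pairwise non-adjacent, while any two consecutive faces are not an admissible family of truncations.

```python
# Sketch (ours): in the plane, a family of faces of a convex polygon can
# host simultaneous truncations iff no two of them are adjacent.  Faces of
# an n-gon are indexed 0, ..., n-1 cyclically.
def adjacent(i, j, n):
    return (i - j) % n in (1, n - 1)

def valid_truncation(faces, n):
    faces = list(faces)
    return all(not adjacent(i, j, n) for i in faces for j in faces if i < j)

for M in range(2, 7):
    n = 2 * M
    # the M alternate faces 0, 2, ..., 2M-2 are pairwise non-adjacent
    assert valid_truncation(range(0, n, 2), n)
    # two consecutive faces are adjacent, hence not admissible
    assert not valid_truncation([0, 1], n)
```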

## 5 Optimal reconstructions

Let us spend a few words about reconstructions. Clearly, for every truncated convex body D, there are infinitely many possible reconstructions; our plan is to focus on some special reconstructions which are optimal for our purposes. They will indeed minimize the cones $${\mathcal {A}}_{c}(x)$$ to be avoided by the vector field, and hence provide the best choice for the application of the results to be stated in Sect. 6. Some preliminary remarks are in order.

Given $${\bar{x}}\in \partial D$$, we can consider the intersection of all those supporting halfspaces whose boundary contains $${\bar{x}}$$. Using the relationship with the normal cone, we can write this intersection as

\begin{aligned} \bigcap \big \{ {\mathcal {H}}_{v}:v\in {\mathcal {N}}^0_D({\bar{x}})\big \} ={\bar{x}}+\big \{ w\in \mathbb {R}^N: \left\langle v,w\right\rangle \le 0, \text { for every }v\in {\mathcal {N}}_D({\bar{x}})\big \} . \end{aligned}
(8)

The polar of the normal cone is the so-called tangent cone [7, 26].

In the following, we denote by $${{\mathrm{conv}}}[A]$$ the convex hull of a given set A, that is the smallest convex set including A. The following lemma is a first step towards optimal reconstructions.

### Lemma 9

Let $$D\subseteq \mathbb {R}^N$$ be a convex body truncated in F. Then, there exists a closed convex set $$E_\mathrm {max}$$ such that $$E_\mathrm {max}\cap {\mathcal {H}}_D=D$$, with the property that, if E is any reconstruction of D with respect to F, then $$E\subseteq E_\mathrm {max}$$.

### Proof

If $$x\in \partial D{\setminus } F$$, then, in a sufficiently small neighbourhood of x, the set D coincides with any reconstruction E with respect to F, and hence $${\mathcal {N}}_D(x)={\mathcal {N}}_E(x)$$. This means that E and D have the same supporting hyperplanes containing x and therefore, by (8), the intersection of the supporting halfspaces of E whose boundary contains x coincides with the corresponding intersection for D. Now, let us set

\begin{aligned} E_\mathrm {max}=\bigcap _{x\in \partial D{\setminus } F}\, \bigcap \big \{ {\mathcal {H}}_{v}:v\in {\mathcal {N}}^0_D(x)\big \} . \end{aligned}

By what we have just seen, it follows that $$E\subseteq E_\mathrm {max}$$ for every possible reconstruction E. Hence, $$D\subseteq E_\mathrm {max}\cap {\mathcal {H}}_D$$. Furthermore, $$E_\mathrm {max}$$ is a closed convex set since it is the intersection of closed convex sets.

We want to prove that $$E_\mathrm {max}\cap {\mathcal {H}}_D=D$$. First of all, we prove that $$\partial D \subseteq \partial (E_\mathrm {max}\cap {\mathcal {H}}_D)$$. Indeed, each point x of $$\partial D$$ belongs to $$E_\mathrm {max}\cap {\mathcal {H}}_D$$, since $$D\subseteq E_\mathrm {max}\cap {\mathcal {H}}_D$$. If $$x\in \partial D{\setminus } F$$, then there is a supporting hyperplane of $$E_\mathrm {max}$$ containing x; on the other hand, if $$x\in F$$, then $$x\in {\mathcal {H}}_D$$. In any case, there is a supporting hyperplane of $$E_\mathrm {max}\cap {\mathcal {H}}_D$$ containing x, so $$x\in \partial (E_\mathrm {max}\cap {\mathcal {H}}_D)$$.

Suppose now by contradiction that there exists $$y\in E_\mathrm {max}\cap {\mathcal {H}}_D$$ such that $$y\notin D$$. Let $$U={\mathcal {B}}(x_0,r)$$ be an open ball contained in D. By convexity, there exists a unique $${\bar{x}}\in \partial D \cap [x_0, y]$$. It is easy to show that there exists an open neighbourhood V of $${\bar{x}}$$ such that $$V\subseteq {{\mathrm{conv}}}[U\cup \{y\}]\subseteq E_\mathrm {max}\cap {\mathcal {H}}_D$$. Then, $${\bar{x}}\notin \partial (E_\mathrm {max}\cap {\mathcal {H}}_D)$$, contradicting the fact that $${\bar{x}}\in \partial D$$. Thus, $$E_\mathrm {max}\cap {\mathcal {H}}_D=D$$, and the proof is completed. $$\square$$

An immediate consequence of the above lemma is that $$E_\mathrm {max}$$ is the smallest set containing every reconstruction of D with respect to F. More precisely, since the intersection of $$E_\mathrm {max}$$ with any arbitrarily large closed ball containing D is a reconstruction, we deduce that every point of $$E_\mathrm {max}$$ is contained in a reconstruction. So, $$E_\mathrm {max}$$ is the union of all possible reconstructions of D with respect to F.

We say that a reconstruction E is optimal if, for every $$x\in F$$,

\begin{aligned} {\mathcal {N}}_C(x)={\mathcal {N}}_{C_\mathrm {max}}(x),\quad \hbox {where}\quad C_\mathrm {max}=\overline{E_\mathrm {max}{\setminus } D}. \end{aligned}

Since, for every reconstruction, the inclusion $${\mathcal {N}}_C(x)\supseteq {\mathcal {N}}_{C_\mathrm {max}}(x)$$ holds for every $$x\in F$$, an optimal reconstruction minimizes $${\mathcal {N}}_C(x)$$, as illustrated in Fig. 4.

In general, $$E_\mathrm {max}$$ is not bounded, and therefore it is not a reconstruction; however, it is always possible to build an optimal reconstruction by simply taking $$E=E_\mathrm {max}\cap K$$, where K is a convex body such that $$D\subseteq {{\mathrm{int}}}K$$. Moreover, one can find an optimal reconstruction E which is as close to D as desired. Indeed, given $$\varepsilon >0$$, it suffices to take $$E=E_\mathrm {max}\,\cap \, {\mathcal {B}}[D,\varepsilon ]$$, where

\begin{aligned} {\mathcal {B}}[D,\varepsilon ]=\{x\in \mathbb {R}^N:{{\mathrm{dist}}}(x,D)\le \varepsilon \}, \end{aligned}

to have an optimal reconstruction whose distance from D is at most $$\varepsilon$$.

### Example 10

(Cylinders/prisms) Let $$D=K\times [-1,1]$$, where K is a convex body in $$\mathbb {R}^{N-1}$$. Then, D is truncated in any of its two bases. For instance, we can take $$H=\mathbb {R}^{N-1}\times \{1\}$$, and $$F=K\times \{1\}$$. In this case, we see that $$E_\mathrm {max}=K\times [-1,+\infty )$$, and a possible optimal reconstruction with respect to the face F is given by $$E=D\cup C$$, where $$C=K\times [1,2]$$. Notice that, if instead of C we take, e.g., $$C'=\{(x,y+1) :x\in K, 0\le y \le {{\mathrm{dist}}}(x,\partial K) \}$$, it is true that we have a reconstruction, but it is not optimal.

### Example 11

(Polytopes) If D is a convex polytope with faces $$F_1,\ldots ,F_m$$, it is truncated with respect to any of them. Let us focus on a particular one, $$F=F_j$$. Correspondingly, we will have

\begin{aligned} E_{j,\mathrm {max}}=\bigcap _{\begin{array}{c} i=1 \\ i\ne j \end{array}}^m {\mathcal {H}}_D^i, \end{aligned}

where $${\mathcal {H}}_D^i$$ denotes the halfspace including D bounded by the supporting hyperplane $$H_i$$ generated by the face $$F_i$$. If $$E_{j,\mathrm {max}}$$ is bounded, then it is an optimal reconstruction.

In the case of a convex body D truncated at $$\{F_1,\ldots , F_M\}$$, we say that a reconstruction $$E=D\cup C_1 \cup \dots \cup C_M$$ is optimal if, for every truncation $$F_i$$, the reconstruction $$E_i=D\cup C_i$$ is optimal.

## 6 Main results

Let $$D \subseteq \mathbb {R}^N$$ be a convex body truncated in $$\{F_1,\ldots , F_M\}$$, with an optimal reconstruction $$E=D\cup C_1\cup \dots \cup C_M$$. We define the map $${\mathcal {A}}_{c}$$, from $$\partial D$$ to the closed, convex cones of $$\mathbb {R}^N$$, as

\begin{aligned} {\mathcal {A}}_{c}(x)={\left\{ \begin{array}{ll} {\mathcal {N}}_{C_i}(x), &{}\text {if }x\in F_i,\\ {\mathcal {N}}_D(x), &{}\text {if }x\in \partial D{\setminus } \bigcup _{i=1}^M F_i. \end{array}\right. } \end{aligned}

Moreover, as usual, we denote by $${\mathcal {A}}_{c}^0(x)$$ the set $${\mathcal {A}}_{c}(x)$$ deprived of the origin. We now state the main theorem of this paper.

### Theorem 12

Let $$D \subseteq \mathbb {R}^N$$ be a convex body truncated in $$\{F_1,\ldots , F_M\}$$, with $$M\ge 2$$, and let $$f:D\rightarrow \mathbb {R}^N$$ be a continuous function satisfying

\begin{aligned} f(x)\notin {\mathcal {A}}_{c}^0 (x),\quad \mathrm{for}\,\mathrm{every }\,x\in \partial D. \end{aligned}

Then, there is an $${\bar{x}}\in D$$ such that $$f({\bar{x}})=0$$. Furthermore, if $$0\notin f(\partial D)$$, then

\begin{aligned} \deg (f,D)=(-1)^N(1-M). \end{aligned}

The proof of Theorem 12 will be given in Sect. 8. We now provide some examples where it can be applied.

### Example 13

(Cylinders/prisms) Let $$D=K\times [-1,1]$$, where $$K\subseteq \mathbb {R}^{N-1}$$ is a convex body. The set D is truncated in $$F_-=K\times \{-1\}$$ and $$F_+=K\times \{1\}$$, and we have

\begin{aligned} {\mathcal {A}}_{c}(x)= {\left\{ \begin{array}{ll} {\mathcal {N}}_D(x), &{}\text {if }x\in \partial K\times \,]-1,1[,\\ -{\mathcal {N}}_D(x), &{}\text {if }x\in {{\mathrm{int}}}\, K\times \{-1,1\},\\ {\mathcal {N}}_K(y)\times \,]-\infty ,0], &{}\text {if }x=(y,1), \text {with }y\in \partial K,\\ {\mathcal {N}}_K(y)\times [0,+\infty [, &{}\text {if }x=(y,-1), \text {with }y\in \partial K. \end{array}\right. } \end{aligned}

These cones are illustrated for the three-dimensional case in Fig. 5, where K is a circle in $$\mathbb {R}^2$$. We remark that, in the case of cylinders, Theorem 12 coincides with Corollary 5.

### Example 14

(Polytopes) Let the convex polytope D, with faces $$F_j$$, be truncated in $$\{F_1,\ldots , F_M\}$$. For every $$x\in \partial D$$, we denote by $$I(x)=\{i:x\in F_i\}$$ the set of indices of those faces containing x, and by $$\nu _i$$ the outward unit vector normal to $$F_i$$. Furthermore, we denote by $$\sigma (i)$$ the sign of the avoiding cones condition in $$F_i$$, namely

\begin{aligned} \sigma (i)={\left\{ \begin{array}{ll} -1, &{}\text {if}\,i=1,\ldots ,M\, \text {(avoiding inner normal cones)},\\ +1, &{}\text {otherwise (avoiding outer normal cones)}. \end{array}\right. } \end{aligned}

Then, $${\mathcal {A}}_{c}(x)$$ corresponds to the convex cone generated by the set

\begin{aligned} \{\sigma (i) \nu _i: i\in I(x)\}, \end{aligned}

whose elements generate, as half-lines, the normal cones assigned by $${\mathcal {A}}_{c}$$ to the points in the interior of the faces containing x. We illustrate in Fig. 6 the particular case of a hexagon truncated in three alternate faces. We highlight that, since in this case $$N=2$$ and $$M=3$$, if f satisfies the avoiding cones condition of Theorem 12, then

\begin{aligned} \deg (f,D)=(-1)^2(1-3)=-2. \end{aligned}

We finally notice that Theorem 3 can be interpreted as a version of Theorem 12, with $$M=0$$. So, having Theorem 4 in mind, we can also write the following extension of Theorem 12.

### Theorem 15

Let $$h:\mathbb {R}^N\rightarrow \mathbb {R}^N$$ be a homeomorphism, such that $$h(0)=0$$, and assume that $$D\subseteq \mathbb {R}^N$$ is a convex body truncated in $$\{F_1,\ldots , F_M\}$$, with $$M\ge 2$$. Let $$f:D\rightarrow \mathbb {R}^N$$ be a continuous function such that

\begin{aligned} f(x)\notin h({\mathcal {A}}_{c}^0 (x)),\quad \mathrm{for}\,\mathrm{every}\,x\in \partial D. \end{aligned}

Then, there is an $${\bar{x}}\in D$$ such that $$f({\bar{x}})=0$$.

Until now, the domain D of our functions has been supposed to be a convex body. However, all our results can be easily extended to sets $${\mathcal {D}}$$ which are just diffeomorphic to a convex body D. By this, we mean that there are two open sets A, B in $$\mathbb {R}^N$$, with $$D\subseteq A$$, $${\mathcal {D}}\subseteq B$$, and a diffeomorphism $$\varphi :A\rightarrow B$$, such that

\begin{aligned} {\mathcal {D}}=\varphi (D). \end{aligned}

To define the normal cone to $${\mathcal {D}}$$ at a boundary point $$y\in \partial {\mathcal {D}}$$, let $$\psi =\varphi ^{-1}:B\rightarrow A$$, so that $$\psi (y)\in \partial D$$, and set

\begin{aligned} {\mathcal {N}}_{\mathcal {D}}(y)=(\psi '(y))^T{\mathcal {N}}_D(\psi (y)). \end{aligned}

We remark that this choice preserves the extended notion of normal cone recalled in (7).
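As an illustration of this transformation rule (a numerical sketch of ours, not taken from the text; the matrix A and the sample point are arbitrary choices), consider the case where D is the closed unit disk and $$\varphi (x)=Ax$$ is an invertible linear map, so that $${\mathcal {D}}$$ is an ellipse. The normal produced by the formula above must then be a positive multiple of the gradient of $$y\mapsto \Vert A^{-1}y\Vert ^2$$ at the corresponding boundary point:

```python
import numpy as np

# D is the closed unit disk, phi(x) = A x with A invertible, so that
# the set script-D = phi(D) is an ellipse.  The definition
#   N_{script-D}(y) = (psi'(y))^T N_D(psi(y)),   psi = phi^{-1},
# should reproduce the usual outward normal of {y : ||A^{-1} y|| = 1}.
A = np.array([[2.0, 0.5],
              [0.3, 1.0]])                   # any invertible matrix works
Ainv = np.linalg.inv(A)

x = np.array([np.cos(0.7), np.sin(0.7)])     # boundary point of the disk; N_D(x) = {lam * x, lam >= 0}
y = A @ x                                    # corresponding boundary point of the ellipse

n_transformed = Ainv.T @ x                   # (psi'(y))^T x, since psi'(y) = A^{-1} for every y
n_direct = Ainv.T @ (Ainv @ y)               # (1/2) * gradient of ||A^{-1} y||^2 at y

cross = n_transformed[0] * n_direct[1] - n_transformed[1] * n_direct[0]
assert abs(cross) < 1e-9 and n_transformed @ n_direct > 0   # same outward ray
print("transformed normal agrees with the direct one")
```

Here $$\psi '(y)=A^{-1}$$ for every y, so the rule is particularly transparent; for a nonlinear $$\varphi$$ one would evaluate the Jacobian of $$\psi$$ at the point y.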

Let us first go back to our variant of the Poincaré–Bohl theorem. Writing, as usual, $${\mathcal {N} }_{\mathcal {D}}^0(y)={\mathcal {N}}_{\mathcal {D}}(y){\setminus }\{0\}$$, we have the following extension of Theorem 3.

### Theorem 16

Assume that $${\mathcal {D}}$$ is a subset of $$\mathbb {R}^N$$, diffeomorphic to a convex body. Let $$f :\mathcal{D}\rightarrow \mathbb {R}^N$$ be a continuous function such that

\begin{aligned} f(y)\not \in {\mathcal {N}}_{\mathcal {D}}^0(y),\quad \mathrm{for}\,\mathrm{every}\,y\in \partial {\mathcal {D}}. \end{aligned}
(9)

Then, there is a $$\bar{y}\in {\mathcal {D}}$$ such that $$f(\bar{y})=0$$.

### Proof

Using the above notation, we have $$D=\psi ({\mathcal {D}})$$ and we define $$\tilde{f} :D\rightarrow \mathbb {R}^N$$ as

\begin{aligned} {\tilde{f}}(x)=(\varphi '(x))^Tf(\varphi (x)). \end{aligned}

Then, condition (4) holds with f replaced by $${\tilde{f}}$$, so Theorem 3 applies and provides an $${\bar{x}}\in D$$ with $${\tilde{f}}({\bar{x}})=0$$. Since $$(\varphi '({\bar{x}}))^T$$ is invertible, $$f(\varphi ({\bar{x}}))=0$$, i.e. $$\bar{y}=\varphi ({\bar{x}})$$ is the desired zero. $$\square$$

In Fig. 7, we illustrate the avoiding cones condition of Theorem 16, in the case when $${\mathcal {D}}$$ has a smooth boundary.

Now, in order to extend Theorem 12, let us consider a set $${\mathcal {D}}$$ which is diffeomorphic to a convex body D, truncated in $$\{F_1,\ldots ,F_M\}$$. Since $$D\subseteq A$$, $${\mathcal {D}}\subseteq B$$, and both sets A and B are open, we can choose a reconstruction E of D with respect to $$\{F_1,\ldots ,F_M\}$$ (even an optimal one) to be contained in A as well. Setting

\begin{aligned} {\mathcal {E}}=\varphi (E),{\mathcal {F}}_1=\varphi (F_1),\ldots ,{\mathcal {F}}_M=\varphi (F_M), \end{aligned}

we say that $${\mathcal {E}}$$ is a reconstruction of $${\mathcal {D}}$$ with respect to $$\{{\mathcal {F}}_1,\ldots ,{\mathcal {F}}_M\}$$. We also say that $${\mathcal {D}}$$ is truncated in $$\{{\mathcal {F}}_1,\ldots , {\mathcal {F}}_M\}$$. Then, referring to the notation introduced in Sect. 4, we have $$E=D\cup C_1\cup \dots \cup C_M$$, and setting $${\mathcal {C}}_1=\varphi (C_1),\ldots ,{\mathcal {C}}_M=\varphi (C_M)$$, we can define the cones

\begin{aligned} {\mathcal {A}}_{c}(y)={\left\{ \begin{array}{ll} {\mathcal {N}}_{{\mathcal {C}}_i}(y), &{}\text {if }y\in {\mathcal {F}}_i,\\ {\mathcal {N}}_{\mathcal {D}}(y), &{}\text {if }y\in \partial {\mathcal {D}}{\setminus } \bigcup _{i=1}^M {\mathcal {F}}_i. \end{array}\right. } \end{aligned}

Writing, as usual, $${\mathcal {A}}_{c}^0(y)={\mathcal {A}}_{c}(y){\setminus }\{0\}$$, we can state the following.

### Theorem 17

Let $${\mathcal {D}} \subseteq \mathbb {R}^N$$, diffeomorphic to a convex body, be truncated in $$\{{\mathcal {F}}_1,\ldots , {\mathcal {F}}_M\}$$, with $$M\ge 2$$, and let $$f:{\mathcal {D}}\rightarrow \mathbb {R}^N$$ be a continuous function satisfying

\begin{aligned} f(y)\notin {\mathcal {A}}_{c}^0 (y),\quad \mathrm{for}\,\mathrm{every}\,y\in \partial {\mathcal {D}}. \end{aligned}

Then, there is a $$\bar{y}\in {\mathcal {D}}$$ such that $$f(\bar{y})=0$$.

An example of the avoiding cones condition of Theorem 17 is illustrated in Fig. 8, where the set $$\mathcal {D}$$ is diffeomorphic to a hexagon D (cf. Fig. 6).

We end this section with the analogue of Theorem 15.

### Theorem 18

Let $$h:\mathbb {R}^N\rightarrow \mathbb {R}^N$$ be a homeomorphism, such that $$h(0)=0$$, assume that $${\mathcal {D}} \subseteq \mathbb {R}^N$$, diffeomorphic to a convex body, is truncated in $$\{{\mathcal {F}}_1,\ldots , {\mathcal {F}}_M\}$$, with $$M\ge 2$$, and let $$f:{\mathcal {D}}\rightarrow \mathbb {R}^N$$ be a continuous function satisfying

\begin{aligned} f(y)\notin h( {\mathcal {A}}_{c}^0 (y)),\quad \mathrm{for}\,\mathrm{every}\,y\in \partial {\mathcal {D}}. \end{aligned}

Then, there is a $$\bar{y}\in {\mathcal {D}}$$ such that $$f(\bar{y})=0$$.

## 7 An application to multi-saddles

In this section, we will show that our results can be applied to deal with the gradient of a potential V having degenerate multi-saddle points, where multiple expansive and contractive directions appear.

A detailed exposition of the planar case can be found in [10], where the authors considered k-fold saddles formed by the alternation, around the critical point, of $$k+1$$ ascending directions and $$k+1$$ descending directions: the former identified by trajectories of the flow of $$\nabla V$$ escaping from the critical point, the latter by trajectories converging to it. (For a similar situation, see also [1, 11, 12].) With this description, the standard non-degenerate saddle is an example of onefold saddle, whereas the monkey saddle is a twofold saddle (cf. Fig. 9).

In higher dimensions, the criterion of alternation is no longer applicable and more sophisticated situations may arise. Different approaches, mainly related to the Conley index or some of its generalizations, have been used to study the degree in the case of higher-dimensional multiple saddle points (cf. [9, 27, 30]). We propose here a simpler strategy, based on our Theorem 12, to recover some of those results.

We consider a continuously differentiable function $$V:{\mathcal {B}}[0,R]\rightarrow \mathbb {R}$$, and assume that, near the boundary of the domain, namely for $$0<r\le \left||x\right||\le R$$, it can be written in the form

\begin{aligned} V(x)=\rho (\left||x\right||)S\left( \frac{x}{\left||x\right||}\right) , \end{aligned}
(10)

where $$\rho :[r,R]\rightarrow \,]0,+\infty [$$ and $$S:{\mathcal {S}}^{N-1}\rightarrow \mathbb {R}$$ are continuously differentiable functions, and $$\rho '(\xi )>0$$, for every $$\xi \in [r,R]$$. This factorization, in a certain sense, generalizes the idea of positive homogeneity, which corresponds to the choice $$\rho (t)=t^\alpha$$, for a certain $$\alpha >0$$.

In this region of the domain, $$\nabla V(x)$$ can be decomposed into radial and tangential components as

\begin{aligned} \nabla V(x)=\rho '(\left||x\right||)\frac{x}{\left||x\right||}S\left( \frac{x}{\left||x\right||}\right) + \frac{\rho (\left||x\right||)}{\left||x\right||} \nabla _{\mathcal {S}}S\left( \frac{x}{\left||x\right|| }\right) , \end{aligned}
(11)

where $$\nabla _{\mathcal {S}}S(y)$$ denotes the tangential gradient of S(y), i.e., for every $$y\in {\mathcal {S}}^{N-1}$$,

\begin{aligned} \nabla _{\mathcal {S}}S(y)=\nabla S(y)- \left\langle \nabla S(y),y\right\rangle y\,. \end{aligned}

We see that $$\nabla _{\mathcal {S}}S$$ corresponds to the surface gradient on the unit sphere of the function $$x\mapsto S(x/\left||x\right||)$$, defined on $$\mathbb {R}^N{\setminus } \{0\}$$.
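For the reader's convenience, let us verify (11): it is simply the chain rule applied to the factorization (10). Writing $$r=\left||x\right||$$ and $$u=x/\left||x\right||$$, we have $$\nabla r=u$$, while the Jacobian matrix of $$x\mapsto u$$ is $$(I-uu^T)/r$$, whence

\begin{aligned} \nabla V(x)=\rho '(r)S(u)\,u+\frac{\rho (r)}{r}\,(I-uu^T)\nabla S(u), \end{aligned}

and $$(I-uu^T)\nabla S(u)=\nabla S(u)-\left\langle \nabla S(u),u\right\rangle u=\nabla _{\mathcal {S}}S(u)$$.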

Since in (11) the two terms in the sum are orthogonal, their sum vanishes if and only if they are both zero. Hence, if $$r\le \left||x\right||\le R$$, we have that

\begin{aligned} \nabla V(x)=0\quad \Leftrightarrow \quad S\left( \frac{x}{\left||x\right||}\right) =0 \quad \hbox {and}\quad \nabla _{\mathcal {S}}S\left( \frac{x}{\left||x\right||}\right) =0. \end{aligned}

In particular, if we want the degree $$\deg (\nabla V,{\mathcal {B}}[0,R])$$ to be well defined, we need to ask that S(x) and $$\nabla _{\mathcal {S}}S(x)$$ do not vanish simultaneously at any $$x\in {\mathcal {S}}^{N-1}$$.

Let us state the main result of this section.

### Theorem 19

In the above setting, assume that

1. (i)

if $$x\in {\mathcal {S}}^{N-1}$$ satisfies $$\nabla _{\mathcal {S}}S(x)=0$$, then $$S(x)\ne 0$$;

2. (ii)

the set $$\{x\in {\mathcal {S}}^{N-1}:S(x)\ge 0\}$$ is the union of M disjoint subsets, each of which is diffeomorphic to an $$(N-1)$$-dimensional ball.

Then, $$\deg (\nabla V, {\mathcal {B}}[0,R])=(-1)^{N}(1-M)$$.

### Proof

If $$M=0$$, we have that $$S(x)<0$$ for every $$x\in {\mathcal {S}}^{N-1}$$. Taking $$D={\mathcal {B}}[0,R]$$, we have that, for every $$x\in \partial D$$, the cone to avoid is $${\mathcal {N}}_D(x)=\{\lambda x :\lambda \ge 0\}$$. Since $$V(x)<0$$, the radial component of $$\nabla V(x)$$ is not zero and points inward, so $$\nabla V(x)\notin {\mathcal {N}}_D(x)$$. Theorem 3 can then be applied to conclude.

So, from now on, we can assume $$M\ge 1$$. Given a vector $$y\in {\mathcal {S}}^{N-1}$$, for every $$x\in {\mathcal {S}}^{N-1}$$ such that $$x\ne \pm y$$, we define

\begin{aligned} \sigma (x;y)=\frac{y-x-\left\langle x,y-x\right\rangle x}{\left||y-x-\left\langle x,y-x\right\rangle x\right||}. \end{aligned}

It is the unit vector in the tangent space to $${\mathcal {S}}^{N-1}$$ at x, associated with the shortest path from x to y. We say that a local maximum point $$y\in {\mathcal {S}}^{N-1}$$ for S is regular if there exists a neighbourhood U of y, with the property that

\begin{aligned} \left\langle \sigma (x;y),\nabla _{\mathcal {S}}S (x)\right\rangle > 0,\quad \hbox {for every }x\in U\cap {\mathcal {S}}^{N-1}. \end{aligned}

This condition is true, for instance, if y is a non-degenerate local maximum point. We first prove the theorem when (ii) is replaced by the following stronger assumption:

(ii$$^*$$):

if $$y\in {\mathcal {S}}^{N-1}$$ satisfies $$\nabla _{\mathcal {S}}S(y)=0$$ and $$S(y)\ge 0$$, then y is a regular local maximum point for S, and $$S(y)>0$$. Moreover, there are exactly M of such points.

Let $$s_1,\ldots ,s_M$$ be the regular maximum points of condition (ii$$^{*}$$). For any $$\varepsilon \in \,]0,R-r[$$, we set

\begin{aligned} {\mathcal {H}}_i=\{x\in \mathbb {R}^N :\left\langle x,s_i\right\rangle \le R-\varepsilon \}. \end{aligned}

Let

\begin{aligned} D={\mathcal {B}}[0,R]\cap {\mathcal {H}}_1 \cap {\mathcal {H}}_2 \cap \dots \cap {\mathcal {H}}_M, \end{aligned}

and define $$H_i=\partial {\mathcal {H}}_i$$. If $$\varepsilon$$ is sufficiently small, then D is a convex body truncated in $$\{F_1,\ldots , F_M\}$$, with $$F_i={\mathcal {B}}[0,R]\cap H_i$$, and $$E={\mathcal {B}}[0,R]$$ is an optimal reconstruction. Let us verify that the avoiding cones condition holds, provided that $$\varepsilon$$ is sufficiently small.

For $$x\in \partial D{\setminus } \bigcup _{i=1}^M F_i$$, the cone to avoid is $${\mathcal {A}}_{c}(x)={\mathcal {N}}_D(x)=\{\lambda x :\lambda \ge 0\}$$. If $$V(x)<0$$, then the radial component of $$\nabla V(x)$$ is not zero and points inward, so $$\nabla V(x)\notin {\mathcal {A}}_{c}(x)$$. If $$V(x)\ge 0$$, since $$x\ne R s_i$$ for each $$i=1,\ldots , M$$, the tangential component of $$\nabla V(x)$$ is not zero and so $$\nabla V(x)\notin {\mathcal {A}}_{c}(x)$$.

If $$x\in {{\mathrm{int}}}_{\partial D} F_i$$, for some $$i=1,\ldots ,M$$, then $${\mathcal {A}}_{c}(x)=\{-\lambda s_i :\lambda \ge 0\}$$. (Note that $${{\mathrm{int}}}_{\partial D} F_i$$ is an $$(N-1)$$-dimensional ball of radius $$\sqrt{\varepsilon (2R-\varepsilon )}$$ centred at $$(R-\varepsilon )s_i$$.) Since $$\nabla V((R-\varepsilon )s_i)=\lambda _i s_i$$, for some $$\lambda _i>0$$, by continuity we must have $$\nabla V (x)\notin {\mathcal {A}}_{c}(x)$$, provided that $$\varepsilon$$ is sufficiently small.
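Concerning the radius of this ball, notice that any $$x\in F_i$$ can be written as $$x=(R-\varepsilon )s_i+w$$, with $$\left\langle w,s_i\right\rangle =0$$, and the constraint $$\left||x\right||\le R$$ amounts to

\begin{aligned} \left||w\right||\le \sqrt{R^2-(R-\varepsilon )^2}=\sqrt{\varepsilon (2R-\varepsilon )}. \end{aligned}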

If $$x\in \partial ^{N-1} F_i$$, its boundary relative to the hyperplane $$H_i$$, for some $$i=1,\ldots ,M$$, then $${\mathcal {A}}_{c}(x)$$ is the convex cone generated by $$\{-s_i, x\}$$. By definition, we have that $$\left\langle \sigma (x;s_i),x\right\rangle =0$$ and, if $$\left||x-Rs_i\right||\le \sqrt{2}R$$, then also $$\left\langle \sigma (x;s_i),-s_i\right\rangle \le 0$$. Thus, if $$\varepsilon$$ is sufficiently small, $$\left\langle \sigma (x;s_i),v\right\rangle \le 0$$ for every $$v\in {\mathcal {A}}_{c}(x)$$. On the other hand, since $$s_i$$ is a regular maximum point, taking $$\varepsilon$$ sufficiently small we get

\begin{aligned} \left\langle \sigma (x;s_i),\nabla V(x)\right\rangle =\left\langle \sigma (x;s_i),\frac{\rho (\left||x\right||)}{\left||x\right||} \nabla _{\mathcal {S}}S\left( \frac{x}{\left||x\right||}\right) \right\rangle >0, \end{aligned}

so that $$\nabla V(x)\notin {\mathcal {A}}_{c}(x)$$.

So, in all cases, we have that $$\nabla V(x)\notin {\mathcal {A}}_{c}(x)$$. Then, by Theorem 12, $$\deg (\nabla V,D)=(-1)^N(1-M)$$. Since there are no critical points of V in $${\mathcal {B}}[0,R]{\setminus } D$$, the excision property of the degree completes the proof in this case.

Let us now consider the general case. We write

\begin{aligned} \{x\in {\mathcal {S}}^{N-1}:S(x)\ge 0\}=\varSigma _1\cup \dots \cup \varSigma _M, \end{aligned}

and assume that, for every $$i=1,\ldots ,M$$, there are an open set $$U_i\subseteq {\mathcal {S}}^{N-1}$$ containing $$\varSigma _i$$, an open set $$V_i$$ containing $${\mathcal {B}}^{N-1}[0,1]$$ and a diffeomorphism $$\psi _i:U_i\rightarrow V_i$$, such that $$\psi _i(\varSigma _i)={\mathcal {B}}^{N-1}[0,1]$$; moreover, the sets $$U_i$$ can be assumed pairwise disjoint.

Define $$P_i:U_i \rightarrow \mathbb {R}$$ as

\begin{aligned} P_i(x)=1-\Vert \psi _i(x)\Vert ^2. \end{aligned}

Then, for every $$x\in \partial \varSigma _i$$, there is a $$\lambda _i(x)>0$$ for which $$\nabla _{\mathcal {S}}S(x)=\lambda _i(x)\nabla _{\mathcal {S}}P_i(x)$$. Hence, for $$\delta >0$$ sufficiently small, $${\mathcal {B}}^{N-1}[0,1+\delta ]\subseteq V_i$$ and, writing $$U_i^{\delta }=\psi _i^{-1}({\mathcal {B}}^{N-1}[0,1+\delta ])$$, we have that $$\varSigma _i\subseteq U_i^\delta \subseteq U_i$$. Furthermore, for $$\delta$$ sufficiently small, we also have

\begin{aligned} \langle \nabla _{\mathcal {S}}S(x),\nabla _{\mathcal {S}}P_i(x)\rangle >0,\quad \hbox {for every}\,x\in U_i^\delta {\setminus }\varSigma _i. \end{aligned}

Let $$\mu :\mathbb {R}\rightarrow \mathbb {R}$$ be an increasing continuously differentiable function such that

\begin{aligned} \mu (s)=\left\{ \begin{array}{ll} 0, &{}\quad \mathrm{if}\,s\le 0,\\ 1, &{}\quad \mathrm{if}\,s\ge \delta , \end{array} \right. \qquad \mu '(0)=\mu '(\delta )=0. \end{aligned}
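A concrete choice of such a function (one possibility among many; any function with the stated properties works equally well) is the cubic polynomial glued as

\begin{aligned} \mu (s)=\left\{ \begin{array}{ll} 0, &{}\quad \mathrm{if}\,s\le 0,\\ 3(s/\delta )^2-2(s/\delta )^3, &{}\quad \mathrm{if}\,0<s<\delta ,\\ 1, &{}\quad \mathrm{if}\,s\ge \delta , \end{array} \right. \end{aligned}

which is continuously differentiable, nondecreasing, and satisfies $$\mu '(0)=\mu '(\delta )=0$$.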

Define $$W:{\mathcal {S}}^{N-1} \times [0,1]\rightarrow \mathbb {R}$$ as follows:

\begin{aligned} W(x,\lambda )=\left\{ \begin{array}{l} \displaystyle \left[ 1-\mu \Big ({{\mathrm{dist}}}\left( \psi _i(x),{\mathcal {B}}^{N-1}[0,1]\right) \Big )\right] \Big (\lambda P_i(x)+(1-\lambda )S(x)\Big )+\\ \begin{array}{ll} \displaystyle \quad +\mu \Big ({{\mathrm{dist}}}\left( \psi _i(x),{\mathcal {B}}^{N-1}[0,1]\right) \Big )S(x), &{}\quad \hbox {if}\,x\in U_i^\delta , \hbox {for some } i, \\ S(x), &{}\quad \hbox {otherwise}. \end{array} \end{array} \right. \end{aligned}

This function is continuously differentiable and transforms $$S(x)=W(x,0)$$ into a function $$\widetilde{S}(x)=W(x,1)$$, satisfying (ii$$^{*}$$). Moreover, the following two additional properties hold:

• the sign of $$W(x,\lambda )$$ does not depend on $$\lambda$$;

• the functions $$W(\cdot ,\lambda )$$ have no critical points y with $$W(y,\lambda )=0$$.

Such a function W induces an admissible homotopy $$H:{\mathcal {B}}[0,R]\times [0,1]\rightarrow \mathbb {R}^N$$, defined as

\begin{aligned} H(x,\lambda )=\nabla \left[ \rho (\left||x\right||)\,W\left( \frac{x}{\left||x\right||},\lambda \right) \right] , \end{aligned}

which transforms $$\nabla V(x)=H(x,0)$$ into $$\nabla \widetilde{V}(x)$$, where $$\widetilde{V}(x)$$ satisfies the assumptions of the theorem, and also the additional condition (ii$$^{*}$$). Since the admissible homotopy preserves the degree, the proof is completed. $$\square$$

The following symmetrical version of Theorem 19 holds.

### Theorem 20

Let the assumptions of Theorem 19 hold, with only (ii) replaced by

(ii$$^-$$):

the set $$\{x\in {\mathcal {S}}^{N-1}:S(x)\le 0\}$$ is the union of M disjoint subsets, each of which is diffeomorphic to an $$(N-1)$$-dimensional ball.

Then, $$\deg (\nabla V, {\mathcal {B}}[0,R])=1-M$$.

### Proof

It is sufficient to apply Theorem 19 to $$-V$$ instead of V. $$\square$$

The above result should be compared with [30, Theorem 4.4], which is stated in a more general setting. We also notice that, when $$M=0$$, Theorem 20 is related to a result by Krasnosel'skii [17] (see also [2]) stating that, if V is coercive, then, for R large enough, $$\deg (\nabla V, {\mathcal {B}}[0,R])=1$$.

In the planar case, conditions (ii) and (ii$$^-$$) can be simplified, as follows.

### Corollary 21

Let the assumptions of Theorem 19 hold, for $$N=2$$, with only (ii) replaced by

(ii$$_2$$):

the function S changes sign exactly 2M times on $${\mathcal {S}}^{1}$$.

Then, $$\deg (\nabla V, {\mathcal {B}}[0,R])=1-M$$.

### Proof

Since the zeros of S are simple, the set $$\{x\in {\mathcal {S}}^{1}:S(x)\ge 0\}$$ is the union of M disjoint arcs, each of which is diffeomorphic to a compact interval of $$\mathbb {R}$$. $$\square$$

We have thus recovered, in the planar case, a variant of the alternation criterion described in [10]. We now give two simple examples where our results directly apply. The first one deals with a planar situation.

### Example 22

Let us consider, for a positive integer k, the family of potentials

\begin{aligned} S_k(s)=\cos [(k+1)s], \end{aligned}

where $$s\in [0, 2\pi [$$ is the angle which determines a point $$x\in {\mathcal {S}}^{1}$$. Taking $$\rho _k(t)=t^{k+1}$$ and identifying $$\mathbb {R}^2$$ with the complex plane, we get

\begin{aligned} V_k(z)=\rho _k(\left|z\right|)S_k(\arg z)=\mathfrak {R}\left( z^{k+1}\right) . \end{aligned}

The saddle generated by $$S_k$$ has $$k+1$$ ascending directions at the points of maximum for $$S_k$$, namely $$s=2j\pi /(k+1)$$, with $$j=0,1,\ldots ,k$$, and $$k+1$$ descending directions at the points of minimum for $$S_k$$, i.e. $$s=(2j+1)\pi /(k+1)$$. We thus see that this choice of $$S_k$$ produces a model of k-fold saddle for every $$k\ge 1$$. In this case, $$\deg (\nabla V_k,{\mathcal {B}}[0,R])=-k$$, for any $$R>0$$.
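This degree can be cross-checked numerically (a sketch of ours, not part of the example): since $$\nabla \,\mathfrak {R}(f)=\overline{f'}$$ for a holomorphic f, under the complex identification $$\nabla V_k$$ corresponds to $$(k+1)\,\overline{z}^{\,k}$$, and the degree on the disk equals the winding number of the boundary values of the gradient field.

```python
import numpy as np

def winding_number(values):
    # total change of argument along a closed loop of nonzero complex values,
    # divided by 2*pi; consecutive samples must be close enough
    turns = np.angle(values[1:] / values[:-1])
    return int(np.rint(turns.sum() / (2 * np.pi)))

theta = np.linspace(0.0, 2 * np.pi, 2000)
z = np.exp(1j * theta)                        # the circle |z| = 1
for k in range(1, 5):
    grad_Vk = (k + 1) * np.conj(z) ** k       # complex form of the gradient of Re(z^{k+1})
    assert winding_number(grad_Vk) == -k      # deg(grad V_k, B[0,R]) = -k
print("winding numbers match -k for k = 1, ..., 4")
```

The sampling density only needs to keep consecutive argument increments below $$\pi$$, which 2000 points on the circle comfortably guarantee for these values of k.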

As we said above, our main purpose is to study non-planar situations as well. In our second example, we show an illustrative application in $$\mathbb {R}^3$$.

### Example 23

Let $$v_1,v_2,v_3,v_4$$ be the vertices of a tetrahedron centred at the origin, namely

\begin{aligned} v_1= & {} \left( 0,0,\frac{\sqrt{6}}{4}\right) ,\quad v_2=\left( -\frac{\sqrt{3}}{6},-\frac{1}{2},-\frac{\sqrt{6}}{12}\right) ,\\ v_3= & {} \left( -\frac{\sqrt{3}}{6},\frac{1}{2},-\frac{\sqrt{6}}{12}\right) ,\quad v_4=\left( \frac{\sqrt{3}}{3},0,-\frac{\sqrt{6}}{12}\right) . \end{aligned}

Let us consider the functions $$V_a,V_b:\mathbb {R}^3\rightarrow \mathbb {R}$$, defined as

\begin{aligned} V_a(x)&=\left||x\right||^2\left[ \frac{1}{5}-\min _{i=1,\ldots ,4}{{\mathrm{dist}}}\left( \frac{x}{\left||x\right||},v_i\right) ^2\right] ,\\ V_b(x)&=\prod _{i=1}^4 \left\langle x,v_i\right\rangle -\frac{\left||x\right||^4}{150}. \end{aligned}

Both potentials admit the factorization (10), since they are positively homogeneous of degree two and four, respectively. The behaviour of their spherical components $$S_a(x)$$ and $$S_b(x)$$ is illustrated in Fig. 10.

The potential $$S_a$$ has four positive maximum points, located at the vertices of the tetrahedron, four negative minima, located at the centres of its faces, and six negative saddle points, located at the midpoints of its edges.

The potential $$S_b$$, instead, has six positive maximum points, located at the midpoints of the edges of the tetrahedron, which thus form the vertices of an octahedron. It also has eight negative minima, located at the vertices and at the centres of the faces of the tetrahedron (i.e. the centres of the faces of the octahedron), and twelve negative saddle points, located at the midpoints of the edges of the octahedron.

Moreover, we observe that both $$V_a$$ and $$V_b$$ satisfy the hypotheses of Theorem 19, with $$M_a=4$$ and $$M_b=6$$, respectively, so that, for every $$R>0$$, we have

\begin{aligned}&\deg (\nabla V_a,{\mathcal {B}}[0,R])=(-1)^3(1-M_a)=3, \\&\deg (\nabla V_b,{\mathcal {B}}[0,R])=(-1)^3(1-M_b)=5. \end{aligned}
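The sign patterns underlying the values $$M_a=4$$ and $$M_b=6$$ can be verified numerically; the following sketch (ours, not part of the original example) evaluates the spherical components at the normalized vertices and edge midpoints of the tetrahedron:

```python
import numpy as np

# Vertices of the tetrahedron as in Example 23.
v = np.array([
    [0.0, 0.0, np.sqrt(6) / 4],
    [-np.sqrt(3) / 6, -0.5, -np.sqrt(6) / 12],
    [-np.sqrt(3) / 6, 0.5, -np.sqrt(6) / 12],
    [np.sqrt(3) / 3, 0.0, -np.sqrt(6) / 12],
])

def S_a(x):
    # spherical component of V_a: 1/5 - min_i dist(x, v_i)^2, for ||x|| = 1
    return 0.2 - min(np.sum((x - vi) ** 2) for vi in v)

def S_b(x):
    # spherical component of V_b: prod_i <x, v_i> - 1/150, for ||x|| = 1
    return np.prod(v @ x) - 1.0 / 150.0

unit = lambda w: w / np.linalg.norm(w)
vertices = [unit(vi) for vi in v]
midpoints = [unit(v[i] + v[j]) for i in range(4) for j in range(i + 1, 4)]

assert all(S_a(x) > 0 for x in vertices)     # 4 positive caps of S_a:  M_a = 4
assert all(S_a(x) < 0 for x in midpoints)    # saddles of S_a are negative
assert all(S_b(x) > 0 for x in midpoints)    # 6 positive caps of S_b:  M_b = 6
assert all(S_b(x) < 0 for x in vertices)     # minima of S_b are negative
print("sign pattern consistent with M_a = 4, M_b = 6")
```

These point evaluations do not by themselves delimit the positive caps, but they are consistent with the description of the critical points given above.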

## 8 Proof of Theorem 12

In this section, in order to provide a proof for Theorem 12, we will need some basic facts from the theory of set-valued maps, for which we refer to the book of Aubin and Cellina [3].

Let us start by showing that, if D is a convex body, then, for every $${\bar{x}}\in D$$,

\begin{aligned} v\in {\mathcal {N}}^0_D({\bar{x}})\quad \Rightarrow \quad -v\notin {\mathcal {N}}^0_D({\bar{x}}). \end{aligned}

Indeed, if on the contrary both v and $$-v$$ belonged to $${\mathcal {N}}^0_D({\bar{x}})$$, then, for every $$x\in D$$, it would be

\begin{aligned} 0 \ge \left\langle v,x-{\bar{x}}\right\rangle =-\left\langle -v,x-{\bar{x}}\right\rangle \ge 0. \end{aligned}

Hence, D would be included in a hyperplane orthogonal to v, and would thus have empty interior, contradicting the assumption that D is a convex body.

The following lemma will be crucial for the proof of Theorem 12.

### Lemma 24

Let $$D\subseteq \mathbb {R}^N$$ be a convex body truncated in F, and $$E=D\cup C$$ be a reconstruction of D with respect to F. If $$f:D\rightarrow \mathbb {R}^N$$ is a continuous map such that $$f(x)\notin {\mathcal {N}}_C(x)$$, for every $$x\in F$$, then f can be extended to a continuous function $${\hat{f}}:E\rightarrow \mathbb {R}^N$$, such that

\begin{aligned} {\hat{f}}(x)\notin {\mathcal {N}}_C(x),\quad \mathrm{for}\,\mathrm{every}\,x\in \partial C. \end{aligned}

### Proof

The core of the proof is to show the existence of a map $$\widehat{{\mathcal {N}}}_C$$ from $$\partial C$$ to the closed, convex cones of $$\mathbb {R}^N$$, with closed graph and such that

1. (N1)

for every $$x\in \partial C$$, $${\mathcal {N}}_C(x)\subseteq \widehat{{\mathcal {N}}}_C(x)$$ and

\begin{aligned} v\in \widehat{{\mathcal {N}}}_C(x){\setminus }\{0\}\quad \Rightarrow \quad -v\notin \widehat{{\mathcal {N}}}_C(x)\,; \end{aligned}
2. (N2)

$$\widehat{{\mathcal {N}}}_C$$ admits a continuous selection $$\alpha :\partial C \rightarrow \mathbb {R}^N$$ such that

\begin{aligned} \alpha (x)\in \widehat{{\mathcal {N}}}_C(x)\setminus \{0\},\quad \hbox { for every }x\in \partial C\,; \end{aligned}
3. (N3)

$$f(x)\notin \widehat{{\mathcal {N}}}_C(x)$$, for every $$x\in F$$.

Step 1 Let us define the set-valued map $$\varPhi$$ from $$\partial C$$ to $$\mathbb {R}^N$$ as

\begin{aligned} \varPhi (x)={{\mathrm{conv}}}\left[ {{\varvec{\nu }}}_C(x)\right] . \end{aligned}

Its values are convex and compact. Let us show that $$\varPhi$$ is upper semicontinuous. To do so, we first observe that, for a compact convex set $$K\subseteq \mathbb {R}^N$$, the $$\varepsilon$$-neighbourhood $${\mathcal {B}}(K,\varepsilon )$$ is convex because of the convexity of the Euclidean distance. Now, take $$x\in \partial C$$ and fix $$\varepsilon >0$$. Since $${{\varvec{\nu }}}_C$$ is upper semicontinuous and $${\mathcal {B}}(\varPhi (x),\varepsilon )$$ is a neighbourhood of $${{\varvec{\nu }}}_C(x)$$, there exists a neighbourhood U of x in $$\partial C$$ such that $${{\varvec{\nu }}}_C(U)\subseteq {\mathcal {B}}(\varPhi (x),\varepsilon )$$. From the convexity of $${\mathcal {B}}(\varPhi (x),\varepsilon )$$, it follows that $$\varPhi (U)\subseteq {\mathcal {B}}(\varPhi (x),\varepsilon )$$. The upper semicontinuity of $$\varPhi$$ is thus proved.

Since $$\varPhi (x)\subseteq {\mathcal {N}}_C(x)$$, we have that

\begin{aligned} v\in \varPhi (x){\setminus }\{0\}\quad \Rightarrow \quad -v\notin \varPhi (x). \end{aligned}

Let us now prove that $$0\notin \varPhi (\partial C)$$. Suppose by contradiction that $$0\in \varPhi (x)$$ for some $$x\in \partial C$$; then there exist $$v_1,\ldots , v_k\in {{\varvec{\nu }}}_C(x)$$ and $$\lambda _1,\ldots , \lambda _k$$ in ]0, 1[, with $$\lambda _1+\dots +\lambda _k=1$$, such that

\begin{aligned} 0=\sum _{i=1}^k\lambda _iv_i=\lambda _1 v_1 + (1-\lambda _1){\tilde{v}}, \quad \text {with} \quad {\tilde{v}}=\sum _{i=2}^k \frac{\lambda _iv_i}{1-\lambda _1}\in \varPhi (x). \end{aligned}

Let us set $$\mu =\min \{\lambda _1/2,(1-\lambda _1)/2\}$$; then,

\begin{aligned} 0\ne w_1= & {} (\lambda _1+\mu )v_1+(1-\lambda _1-\mu ){\tilde{v}}\in \varPhi (x),\\ 0\ne w_2= & {} (\lambda _1-\mu )v_1+(1-\lambda _1+\mu ){\tilde{v}}\in \varPhi (x), \end{aligned}

with $$w_2=-w_1$$, in contradiction with the fact that $$\varPhi (x)$$ does not contain opposite vectors. Hence, $$0\notin \varPhi (x)$$ for every $$x\in \partial C$$.

Since $$\varPhi$$ is upper semicontinuous and thus has a closed graph, we can set

\begin{aligned} \delta _0:={{\mathrm{dist}}}(\partial C \times \{0\}, \mathop {{{\mathrm{graph}}}}\limits _{{\partial C}} \varPhi )>0. \end{aligned}
(12)

Furthermore, we note that $$\varPhi (\partial C)\subseteq {\mathcal {B}}[0,1]$$, since $${{\varvec{\nu }}}_C$$ takes its values in the unit sphere and the ball $${\mathcal {B}}[0,1]$$ is convex; being the image of the compact set $$\partial C$$ under an upper semicontinuous map with compact values, $$\varPhi (\partial C)$$ is compact.

Step 2 Since $$0\notin f(F)$$, we can define $$f_1:F\rightarrow {\mathcal {S}}^{N-1}\subseteq \mathbb {R}^N$$ as

\begin{aligned} f_1(x)=\frac{f(x)}{\left||f(x)\right||}. \end{aligned}

The function $$f_1$$ is continuous and the hypothesis $$f(x)\notin {\mathcal {N}}_C(x)$$ is equivalent to $$f_1(x)\notin {{\varvec{\nu }}}_C(x)$$, from which it follows that $$f_1(x)\notin \varPhi (x)$$, for every $$x\in F$$. Thus we can define

\begin{aligned} \delta _1:={{\mathrm{dist}}}(\mathop {{{\mathrm{graph}}}}\limits _{{F}} f_1, \mathop {{{\mathrm{graph}}}}\limits _{{\partial C}} \varPhi )>0. \end{aligned}
(13)

We remark that we are considering the distance in $$\mathbb {R}^N\times \mathbb {R}^N$$ between two compact sets, namely the graphs of two maps with different domains.

By [3, Sect. 1.13, Theorem 1] (cf. also [15]), there exists a sequence of upper semicontinuous set-valued maps $$\varPhi _i$$, from $$\partial C$$ to $$\mathbb {R}^N$$, satisfying

1. (S1)

for every $$i\in \mathbb {N}$$, $$\varPhi _i$$ has a continuous selection $$\alpha _i$$;

2. (S2)

for every $$i\in \mathbb {N}$$, $$\varPhi _i$$ has closed graph and compact values;

3. (S3)

for every $$x\in \partial C$$,

\begin{aligned} \varPhi (x)\subseteq \cdots \subseteq \varPhi _{i+1}(x)\subseteq \varPhi _i(x) \subseteq \cdots \subseteq \varPhi _0(x), \end{aligned}

and

\begin{aligned} \varPhi (x)=\displaystyle \bigcap _{i\in \mathbb {N}} \varPhi _i(x). \end{aligned}

Moreover, since $$\varPhi (\partial C)$$ is compact, the maps $$\varPhi _i$$ can be taken with convex values.

Let us introduce the set-valued maps $${{\varvec{\nu }}}_i$$ from $$\partial C$$ to $$\mathbb {R}^N$$ as

\begin{aligned} {{\varvec{\nu }}}_i (x)=\left\{ \frac{y}{\left||y\right||}: y\in \varPhi _i(x){\setminus }\{0\} \right\} . \end{aligned}

Note that the maps $${{\varvec{\nu }}}_i$$ have compact graph. Moreover, for every $$x\in \partial C$$,

\begin{aligned} {{\varvec{\nu }}}_{i+1}(x)\subseteq {{\varvec{\nu }}}_i(x), \quad \text {and} \quad \bigcap _{i\in \mathbb {N}}{{\varvec{\nu }}}_i(x)={{\varvec{\nu }}}_C(x). \end{aligned}

From this and the continuity of the distance, we get that there exists an index $$i'\in \mathbb {N}$$ such that, for every $$i\ge i'$$,

\begin{aligned} {{\mathrm{dist}}}(\mathop {{{\mathrm{graph}}}}\limits _{{F}} f_1, \mathop {{{\mathrm{graph}}}}\limits _{{\partial C}} {{\varvec{\nu }}}_i)>\frac{\delta _1}{2}, \end{aligned}
(14)

where $$\delta _1$$ has been defined in (13). Similarly, from (12), we get that there exists $${\bar{\imath }}\ge i'$$ such that $$0\notin \varPhi _i(\partial C)$$, for every $$i\ge {\bar{\imath }}$$.

Step 3 We claim that, for any $$j\ge {\bar{\imath }}$$, the choice

\begin{aligned} \widehat{{\mathcal {N}}}_C(x)={{\mathrm{cone}}}[\varPhi _j(x)] \end{aligned}

satisfies all the requirements (N1), (N2) and (N3). First of all, we notice that the cone generated by a compact, convex set not containing the origin is always closed and convex. Similarly, since the graph of $$\varPhi _j$$ is compact, it follows that the graph of $$\widehat{{\mathcal {N}}}_C$$ is closed. Furthermore, since $${{\varvec{\nu }}}_C(x)\subseteq \varPhi (x)\subseteq \varPhi _j(x) \subseteq \widehat{{\mathcal {N}}}_C(x)$$, it follows that $${\mathcal {N}}_C(x)\subseteq \widehat{{\mathcal {N}}}_C(x)$$.

Now let us suppose by contradiction that, for some $$x\in \partial C$$, there exists $$v\in \widehat{{\mathcal {N}}}_C(x){\setminus }\{0\}$$ such that $$-v\in \widehat{{\mathcal {N}}}_C(x)$$. Then there exist $$v_1=a_1v$$ and $$v_2=-a_2v$$, with $$a_1>0$$, $$a_2>0$$, such that both $$v_1\in \varPhi _j(x)$$ and $$v_2\in \varPhi _j(x)$$. Since $$\varPhi _j(x)$$ is convex, it follows that

\begin{aligned} 0=\frac{a_2}{a_1+a_2}v_1 + \frac{a_1}{a_1+a_2}v_2 \in \varPhi _j(x), \end{aligned}

a contradiction, since $$j\ge {\bar{\imath }}$$. Hence, (N1) is satisfied.

To satisfy (N2), it is sufficient to take $$\alpha =\alpha _j$$, where $$\alpha _j$$ is a continuous selection of $$\varPhi _j$$ given by (S1). Since $$0\notin \varPhi _j(x)$$ for every $$x\in \partial C$$, we have that $$\alpha _j(x)\ne 0$$ for every $$x\in \partial C$$.

Let us now define $${\varvec{{\hat{\nu }}}}_C (x)={{\varvec{\nu }}}_j(x)$$, for a fixed $$j\ge {\bar{\imath }}$$. Then, from (14), we have the estimate

\begin{aligned} {{\mathrm{dist}}}(\mathop {{{\mathrm{graph}}}}\limits _{F} f_1, \mathop {{{\mathrm{graph}}}}\limits _{{\partial C}} {\varvec{{\hat{\nu }}}}_C)>\frac{\delta _1}{2}, \end{aligned}

from which (N3) follows straightforwardly.

Step 4 Now we are ready to construct the desired extension $${\hat{f}}$$. Let us pick any $$0<\delta <\delta _1/2$$. We define $$F_\delta =\partial C \cap {\mathcal {B}}(F,\delta )$$ and introduce the function $$f_2:F_\delta \rightarrow {\mathcal {S}}^{N-1}\subseteq \mathbb {R}^N$$ as

\begin{aligned} f_2(x)=f_1(\pi _F(x))=\frac{f(\pi _F(x))}{\Vert f(\pi _F(x))\Vert }. \end{aligned}

For every $$x\in F_\delta$$, we have

\begin{aligned} {{\mathrm{dist}}}\bigl ((x,f_2(x)),\mathop {{{\mathrm{graph}}}}\limits _{{F}} f_1\bigr )\le {{\mathrm{dist}}}\bigl ((x,f_2(x)),(\pi _F(x),f_2(x))\bigr )={{\mathrm{dist}}}(x,F)\le \delta . \end{aligned}

Using the triangle inequality, this implies

\begin{aligned} {{\mathrm{dist}}}\bigl ((x,f_2(x)),\mathop {{{\mathrm{graph}}}}\limits _{{\partial C}} {\varvec{{\hat{\nu }}}}_C\bigr )&\ge {{\mathrm{dist}}}\bigl (\mathop {{{\mathrm{graph}}}}\limits _{{F}} f_1, \mathop {{{\mathrm{graph}}}}\limits _{{\partial C}} {\varvec{{\hat{\nu }}}}_C\bigr ) - {{\mathrm{dist}}}\bigl ((x,f_2(x)),\mathop {{{\mathrm{graph}}}}\limits _{{F}} f_1\bigr ) \\&\ge \frac{\delta _1}{2} -\delta >0, \end{aligned}

and so

\begin{aligned} {{\mathrm{dist}}}(\mathop {{{\mathrm{graph}}}}\limits _{{F_\delta }} f_2,\mathop {{{\mathrm{graph}}}}\limits _{{\partial C}} {\varvec{{\hat{\nu }}}}_C)\ge \frac{\delta _1}{2} -\delta >0, \end{aligned}

from which it follows that $$f_2(x)\notin {\varvec{{\hat{\nu }}}}_C (x)$$, for every $$x\in F_\delta$$, and consequently $$f(\pi _F(x))\notin \widehat{{\mathcal {N}}}_C(x)$$.

Writing $$\lambda _x={{\mathrm{dist}}}(x,F)/\delta$$, we now set

\begin{aligned} {\hat{f}}(x)= {\left\{ \begin{array}{ll} f(x), &{}\text {if }x\in D,\\ (1-\lambda _x) f(\pi _F(x)) -\lambda _x \alpha (x), &{}\text {if }x\in F_\delta {\setminus } F, \\ -\alpha (x), &{}\text {if }x\in \partial C{\setminus } F_\delta , \end{array}\right. } \end{aligned}

where $$\alpha :\partial C \rightarrow \mathbb {R}^N$$ is the continuous selection provided by (N2). We thus obtain a continuous function defined on $$D\cup \partial C$$. If we prove that $${\hat{f}}$$ satisfies the desired property on $$\partial C$$, the proof will be complete, since Tietze’s theorem then provides a continuous extension $${\hat{f}}:E\rightarrow \mathbb {R}^N$$. What we will actually show is that

\begin{aligned} {\hat{f}}(x)\notin \widehat{{\mathcal {N}}}_C(x), \quad \hbox { for every }x\in \partial C. \end{aligned}
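Note first that $${\hat{f}}$$ is indeed well defined and continuous on $$D\cup \partial C$$: recalling that $$\pi _F(x)=x$$, and hence $$\lambda _x=0$$, for every $$x\in F$$, while $$\lambda _x=1$$ when $${{\mathrm{dist}}}(x,F)=\delta$$, the three expressions defining $${\hat{f}}$$ agree on the overlaps of their domains, since

\begin{aligned} (1-\lambda _x) f(\pi _F(x)) -\lambda _x \alpha (x)= {\left\{ \begin{array}{ll} f(x), &{}\text {if }x\in F,\\ -\alpha (x), &{}\text {if }{{\mathrm{dist}}}(x,F)=\delta . \end{array}\right. } \end{aligned}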

We already know by (N3) that $${\hat{f}}(x)\notin \widehat{{\mathcal {N}}}_C(x)$$, for every $$x\in F$$. On the other hand, if $$x\in \partial C{\setminus } F_\delta$$, then $${\hat{f}}(x)=-\alpha (x)$$, where $$\alpha (x)\in \widehat{{\mathcal {N}}}_C(x){\setminus }\{0\}$$ by (N2), so that (N1) guarantees $$-\alpha (x)\notin \widehat{{\mathcal {N}}}_C(x)$$. Let us now take $$x\in F_\delta {\setminus } F$$ and assume by contradiction that $${\hat{f}}(x)\in \widehat{{\mathcal {N}}}_C(x)$$. Then, since $$\widehat{{\mathcal {N}}}_C(x)$$ is a convex cone and $$\alpha (x)\in \widehat{{\mathcal {N}}}_C(x)$$,

\begin{aligned} f(\pi _F(x))= \frac{1}{1-\lambda _x} {\hat{f}}(x)+ \frac{\lambda _x}{1-\lambda _x}\alpha (x)\in \widehat{{\mathcal {N}}}_C(x), \end{aligned}

a contradiction, since $$f(\pi _F(x))\notin \widehat{{\mathcal {N}}}_C(x)$$ for every $$x\in F_\delta$$. The lemma is thus proved. $$\square$$

We can now proceed to complete the proof of our theorem.

Let $$E=D\cup C_1 \cup \dots \cup C_M$$ be an optimal reconstruction of the truncated convex body D. Applying Lemma 24 iteratively to each partial reconstruction $$C_i$$, we obtain a continuous extension $${\hat{f}}:E\rightarrow \mathbb {R}^N$$ such that

\begin{aligned} {\hat{f}}(x)\notin {\mathcal {N}}_{C_i}(x), \quad \text {for every }x\in \partial C_i, \end{aligned}

for $$i=1,\ldots , M$$, and hence also

\begin{aligned} {\hat{f}}(x)\notin {\mathcal {N}}_E(x), \quad \text {for every }x\in \partial E. \end{aligned}

Thus, by Theorem 3, we have that

\begin{aligned} \deg ({\hat{f}}, E)=\deg ({\hat{f}}, C_1)=\dots =\deg ({\hat{f}}, C_M)=(-1)^N. \end{aligned}

By the additivity property of the topological degree, we have

\begin{aligned} \deg ({\hat{f}},D)=\deg ({\hat{f}}, E) - \sum _{i=1}^M \deg ({\hat{f}}, C_i)=(-1)^N(1-M). \end{aligned}
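More explicitly, the additivity property applied to the decomposition $$E=D\cup C_1 \cup \dots \cup C_M$$ reads

\begin{aligned} \deg ({\hat{f}}, E)=\deg ({\hat{f}},D)+\sum _{i=1}^M \deg ({\hat{f}}, C_i), \end{aligned}

and inserting the values $$\deg ({\hat{f}}, E)=(-1)^N$$ and $$\deg ({\hat{f}}, C_i)=(-1)^N$$ computed above yields $$\deg ({\hat{f}},D)=(-1)^N-M(-1)^N=(-1)^N(1-M)$$.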

Since f coincides with $${\hat{f}}$$ on D, the theorem is proved.