Approximation of curve-based sleeve functions in high dimensions

Sleeve functions are generalizations of the well-established ridge functions that play a major role in the theory of partial differential equations, medical imaging, statistics, and neural networks. Whereas ridge functions are non-linear, univariate functions of the distance to hyperplanes, sleeve functions are based on the squared distance to lower-dimensional manifolds. The present work is a first step towards the study of general sleeve functions, starting with sleeve functions based on finite-length curves. To capture these curve-based sleeve functions, we propose and study a two-step method, where first the outer univariate function, the profile, is recovered and, second, the underlying curve is represented by a polygonal chain. Introducing a concept of well-separation, we ensure that the proposed method always terminates and approximates the true sleeve function with a certain quality. Investigating the local geometry, we study an inexact version of our method and show its success under certain conditions.


Introduction
The capturing or approximation of multivariate functions is nowadays one of the key elements to tackle a great number of scientific and real-world problems. A complicated function is here usually replaced by a sufficiently simple function to find the numerical solution of the problem or to speed up the required numerical computations. Especially if the domain becomes high-dimensional, the approximation should rely on relatively few given function values. Unfortunately, the approximation of highly multivariate functions is hampered by the so-called curse of dimensionality [ ; ; ]. One of the most impressive results, given by Novak & Woźniakowski [ ], is here the intractability of the uniform approximation of even smooth functions.
In many applications, the considered functions however possess specific low-dimensional structures that allow one to overcome the curse of dimensionality. One popular approach assumes that the function of interest looks like a ridge. Mathematically, a ridge function f : ℝ^d → ℝ possesses the form f(x) = g(Ax) with A ∈ ℝ^(ℓ×d), where ℓ < d. These functions are especially constant along the kernel of A. The function g is usually called the ridge profile, whereas A is the ridge matrix. In the extreme case ℓ = 1, the ridge function becomes f(x) = g(⟨a, x⟩) with a ∈ ℝ^d.
Although these (vector-based) ridge functions are constant along the (d − 1)-dimensional subspace perpendicular to a, there are numerous applications showing their usefulness.
For instance, they appear as plane waves in the theory of partial differential equations [ ], play a major role in computed tomography [ ], occur in statistical regression theory [ ; ], and provide the basis to analyse neural networks [ ; ; ; ; ; ]. Further, the approximation of a multivariate function by a sum of vector-based ridge functions is a topic of its own and has been studied in [ ; ; ; ; ; ; ]. The approximation properties of matrix-based ridge functions have been considered in [ ].
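As a minimal illustration of the definition (our own sketch, not taken from the paper), the following Julia snippet builds a vector-based ridge function and checks that it is constant along a direction orthogonal to the ridge vector:

```julia
using LinearAlgebra

g(t) = sin(t)                      # some univariate ridge profile
a = normalize([1.0, 2.0, -1.0])    # ridge vector in ℝ³
f(x) = g(dot(a, x))                # vector-based ridge function f(x) = g(⟨a, x⟩)

x = [0.5, -1.0, 2.0]
v = [2.0, -1.0, 0.0]               # v ⊥ a, i.e. a direction in the kernel of aᵀ
println(f(x) ≈ f(x + 3.7v))        # true: f is constant along v
```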
Due to their usefulness and approximation properties, the question arises how to learn the ridge profile and the ridge matrix/vector from certain function evaluations. A first step in this direction has been taken by DeVore, Petrova & Wojtaszczyk [ ], who study the approximation of functions f : ℝ^d → ℝ depending only on a small set of active variables, i.e. f(x_1, …, x_d) = g(x_{i_1}, …, x_{i_ℓ}) with ℓ < d, which is a matrix-based ridge function whose ridge matrix consists of only unit vectors. For this setting, DeVore, Petrova & Wojtaszczyk establish an algorithm to find the active variables and approximate the profile.
For vector-based ridge functions, a first recovery algorithm using function queries has been proposed by Cohen et al. [ ], where first the ridge profile is approximated and afterwards the ridge vector with non-negative entries is recovered using compressed sensing techniques. Overall, the algorithm performs comparably to the approximation of univariate, continuous functions with respect to the approximation rate. Another approach to tackle the recovery problem is to exploit the directional derivatives of f (for a vector-based ridge function, the gradient is given by ∇f(x) = g′(⟨a, x⟩) a) or, more precisely, to approximate them by finite differences. Using them, Fornasier, Schnass & Vybíral [ ] study vector-based ridge functions on the ball. Their results have then been extended to the cube by Kolleck & Vybíral [ ]. In order to apply the tools from compressed sensing, the previous works assume that the ridge vector is sparse or nearly sparse. The algorithm proposed by Tyagi & Cevher [ ] uses techniques from low-rank matrix recovery (more precisely, the Dantzig selector) to overcome the compressibility assumption. Moreover, the recovery of vector-based ridge functions has been studied by Mayer, Ullrich & Vybíral [ ], who show that the derivative of the ridge profile has to be bounded from below away from zero and that this condition is necessary in order to reduce the sampling complexity.
Essentially, a vector-based ridge function may be interpreted as a function of the distance to a (d − 1)-dimensional subspace, i.e. f(x) = g(⟨a, x⟩) = g̃(dist(x, A)) with A := {x : x ⊥ a}; so one possible generalization is to replace the subspace by another set. Since the level sets of the resulting f are now inflated versions of A and do not look like a ridge anymore, this type of function has been called a sleeve function by Keiper [ ]. More precisely, Keiper defined a sleeve function on the basis of the squared distance, i.e.

f(x) = g(dist(x, A)²),
where A is a low-dimensional manifold in ℝ^d like an ℓ-dimensional subspace U with ℓ < d. For the latter case, the gradient of f becomes perpendicular to the underlying subspace U, in analogy to the gradient of a vector-based ridge function. This implies that the tangent plane ∇f(x)^⊥ contains U. On the basis of this observation, Keiper derived an adaptive algorithm to recover U by approximating the gradient ∇f at random points by finite differences. An alternative approach to learn U is to solve an optimization problem over the Grassmannian [ ].
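For intuition, a curve-based instance of this definition can be put together in a few lines of Julia; the brute-force distance evaluation below is purely illustrative and assumes that dense sampling of the curve is accurate enough:

```julia
γ(s) = [cos(s), sin(s)]                          # quarter circle as underlying curve
g(t) = t                                          # identity profile
dist2(x) = minimum(sum(abs2, x - γ(s)) for s in range(0, π/2, length=10_001))
f(x) = g(dist2(x))                                # sleeve function f(x) = g(dist(x, γ)²)

println(f([2.0, 0.0]))                            # ≈ 1.0: nearest curve point is (1, 0)
```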

Our Contributions
The focus of the present work is to study sleeve functions that are based on non-linear underlying structures. More precisely, we are interested in learning sleeve functions based on Jordan arcs, which are injective mappings γ : [0, ℓ(γ)] → ℝ^d, where ℓ(γ) is the length of the arc. Jordan arcs are thus non-self-intersecting, finite-length curves. We require that our underlying Jordan arc is at least twice continuously differentiable; henceforth, we call it a (Jordan) C²-arc or C²-curve. Formally, the corresponding sleeve function is defined as

f(x) := g(dist(x, γ)²),  x ∈ ℝ^d,

with the sleeve profile g : [0, ∞) → ℝ.
Our main interest is to capture the sleeve profile g and the underlying curve γ numerically from point and gradient queries, where the gradient may also be approximated by finite differences. To analyse the proposed algorithm, we require that the derivative of the profile is bounded from below away from zero at the origin and that the Jordan curve remains non-self-intersecting if it is inflated to some extent. Our central contributions are the following:
• We propose an adaptive, two-step learning algorithm to approximate the profile and to recover the underlying curve on the basis of projections that are computed from function and gradient queries. The algorithm always terminates and captures the underlying curve and profile up to given approximation errors.
• We analyse the effect of inaccurately computed projections on the proposed method for recovering the underlying curve and show that the additional error can be controlled under suitable assumptions.
• We derive a uniform bound on the approximation error for the composed two steps of the proposed method.

Roadmap
After starting with some preliminaries in Section , we introduce the concept of well-separated Jordan curves, enforcing an extended non-self-intersection property and a bounded curvature. The consequences regarding the projection and distance to a curve are studied in Section . Using the well-separation, we calculate the maximal approximation error caused by replacing an arc by a polygonal chain, see Section . In Section , we derive the second step of our capturing algorithm, which recovers the underlying curve by computing projections from function and gradient queries, and study the influence of numerical errors during the projection. Capturing the sleeve profile, which constitutes the first step of the method, we propose an adaptive learning algorithm and bound the corresponding approximation error in Section . We conclude with some numerical experiments in Section showing that the method can be implemented and considering some special cases.
The distance of a point x to γ is now defined by dist(x, γ) := dist(x, ran γ), where ran γ denotes the image of γ in ℝ^d, and where dist(x, S) := inf_{y ∈ S} ‖x − y‖ for general S ⊂ ℝ^d and the Euclidean norm ‖·‖. Based on the distance, the projection onto γ is defined as the set-valued map proj_γ(x) := {y ∈ ran γ : ‖x − y‖ = dist(x, γ)}. Because of the closedness of ran γ, the distance is always attained; so the projection is never empty. For two curves γ and γ̃, we define the distance between them as the Hausdorff distance of their images, i.e.

dist(γ, γ̃) := max{ sup_{x ∈ ran γ} dist(x, γ̃), sup_{y ∈ ran γ̃} dist(y, γ) }.
For two points P, Q ∈ ℝ^d, we denote by PQ→ := Q − P the vector from P to Q, where P and Q are interpreted as vectors themselves. Likewise, we define the distance and projection of a point to a curve as above. The Euclidean distance between P and Q is just given by dist(P, Q) = ‖PQ→‖. The linear line segment between P, Q ∈ ℝ^d is denoted by [PQ]. For points on a curve, i.e. P = γ(s) and Q = γ(t), the arc of γ between them is denoted by γ_[PQ]. Besides the Euclidean norm ‖·‖, we use the Chebyshev norm ‖·‖_∞ for vectors and the Frobenius norm ‖·‖_F for matrices. The unit vectors are denoted by ε_i with i = 1, …, d, and the all-ones vector by 1. We refer to the identity matrix as I. For the closed ball with radius r around x, we write B̄_r(x) and, for the open ball, B_r(x). The non-negative and non-positive real numbers are denoted by ℝ₊ and ℝ₋ respectively; both contain zero. The cone of a set S ⊂ ℝ^d is defined as cone S := {λx : x ∈ S, λ ≥ 0}. Finally, we denote by C^k(X, Y) the k-times continuously differentiable mappings from X to Y. If X and Y are obvious, we may write C^k(X) or even C^k. If the projection is single-valued, we may interpret proj_γ(x) as a vector or point without further notice, i.e. we may write y = proj_γ(x) with y ∈ ℝ^d. If the (single-valued) projection can only be computed up to ε > 0, we use the notation y ≈_ε proj_γ(x), where y ≈_ε z means ‖y − z‖ ≤ ε. Denoting the cardinality of a set by #[·], we call x ambiguous or an ambiguity point if #[proj_γ(x)] > 1, and unambiguous or an unambiguity point otherwise.

Well-Separated Jordan Curves
Since our approximation of the unknown curve γ will be based on projections, we have to ensure that these are well defined in a neighbourhood of γ. For this, the Jordan curve is not allowed to intersect itself even if it is inflated to some extent. To express this assumption mathematically, we use tangential and normal cones. For the image of a C²-curve, the tangential cone [ : Def.] becomes

T(γ(s)) = cone{γ′(s), −γ′(s)} for inner points s, and T(γ(0)) = cone{γ′(0+)}, T(γ(ℓ(γ))) = cone{−γ′(ℓ(γ)−)};

so T(γ(s)) is usually spanned by the tangent and is the ray of the right-hand or left-hand derivative at the end points. The (regular) normal cone [ : Prop.] is now given by

N(γ(s)) := {v : ⟨v, w⟩ ≤ 0 for all w ∈ T(γ(s))};

so N(γ(s)) is usually the hyperplane orthogonal to the tangent γ′(s) and a half-space at the end points.
Figuratively, the well-separation means that γ is not allowed to intersect an inflated circle around the curve. Henceforth, we call the inflated circle the normal ring of γ at s ∈ (0, ℓ(γ)). In three dimensions, the normal ring looks like a threaded horn torus or water wing. At the end points, the normal ring is closed with a half-ball. Up to a local neighbourhood, the points of γ are thus well separated by a distance of 2ρ at the least. Further, the curvature of γ is bounded by 1/ρ since the osculating circle would then be included in the boundary of the normal ring.
The well-separation ensures the single-valuedness of the projection within a certain neighbourhood of the curve.

Theorem (Single-Valued Projection). Let the Jordan C²-arc γ : [0, ℓ(γ)] → ℝ^d be ρ-separated, and let x ∈ ℝ^d be a point with dist(x, γ) < ρ. The projection proj_γ(x) is then single-valued and thus unique in an open neighbourhood of x.
Proof. If y = γ(s) is contained in proj_γ(x), then x − y has to be in the normal cone N(γ(s)), see [ : Ex.]. Therefore, the closed ball B̄_r(x) with radius r := ‖x − y‖ < ρ is contained in B̄_ρ(y + ρ (x − y)/‖x − y‖), which is the closure of a ball in the normal ring. Since the interior of B̄_ρ contains no points of γ, and since B̄_r touches ∂B̄_ρ at exactly one point, the intersection B̄_r ∩ ran γ consists only of the point y, showing that the projection is single-valued. The same argumentation holds for all points in B_δ(x) with δ := ρ − dist(x, γ).

More generally, if x ∈ ℝ^d is no ambiguity point and does not lie on γ, we always find a small neighbourhood where the projection is single-valued too. Moreover, the set of ambiguity points is a set of measure zero. To study the set of ambiguity points, we use that the projection onto and the distance to a Jordan C²-curve are differentiable at most unambiguity points. Both results can be found in [ ], and we state them for our specific setting with C²-curves.

Theorem (Dudek–Holly [ : Thm.]). Let γ be a Jordan C²-arc, and let x ∈ ℝ^d be a point within an open neighbourhood where the projection is single-valued. If proj_γ(x) is an inner point, then proj_γ is differentiable at x.
The restriction to points that are projected to inner points is crucial here since the projection becomes non-differentiable at the end points.
Counterexample (End Points). Consider the curve or line segment γ(s) := (s, 0)^T with s ∈ [0, ℓ(γ)]. Every point x = (x₁, x₂)^T with x₁ ≤ 0 is projected to the end point γ(0) = (0, 0)^T, whereas every point with 0 ≤ x₁ ≤ ℓ(γ) is projected to (x₁, 0)^T; across the hyperplane x₁ = 0, the projection is thus continuous but not differentiable. The distance to a curve is obviously not differentiable at points on the curve. In contrast to the projection, the distance is differentiable at points projected onto the end points.

Theorem (Dudek–Holly [ : Prop.]). Let γ be a Jordan C²-arc, and let x ∈ ℝ^d be a point within an open neighbourhood where the projection is single-valued. If x ∉ ran γ, then dist(·, γ) is differentiable at x with

∇ dist(x, γ) = (x − proj_γ(x)) / dist(x, γ).
Proof. For points x projected to inner points, the statement has been established in [ ]. If x lies in a neighbourhood projected to one end point, the statement is clear since the distance becomes the Euclidean distance to the end point. The interesting points are thus x = γ(0) + v with v ⊥ γ′(0+) and proj_γ(x) = γ(0). On one side of the affine hyperplane corresponding to γ(0), the points are projected to an inner point and, on the other side, to the end point. The corresponding derivatives are given by the inner-point case above and by the gradient (x − γ(0))/‖x − γ(0)‖ of the Euclidean distance to the end point.
For y → x in both half-spaces, the derivatives converge to the same value since the projection is continuous.
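The gradient formula can be checked numerically; the following self-contained sketch (our own example) uses the unit circle, a closed curve rather than an arc, but the formula is local, and compares the analytic gradient with a symmetric difference quotient:

```julia
using LinearAlgebra

dist_circle(x) = abs(norm(x) - 1)                 # distance to the unit circle
x = [1.4, 0.7]
p = x / norm(x)                                   # projection onto the circle
analytic = (x - p) / dist_circle(x)               # (x − proj(x)) / dist(x, γ)

h = 1e-6
numeric = [(dist_circle(x + h*e) - dist_circle(x - h*e)) / (2h)
           for e in ([1.0, 0.0], [0.0, 1.0])]     # symmetric difference quotient
println(norm(analytic - numeric) < 1e-5)          # true
```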
The ambiguity points with respect to a Jordan C²-curve have a benign structure. The set A₂ of all points with exactly two projections has Lebesgue measure zero.
Proof. Let x be an ambiguity point in A₂ with projections y₁ and y₂. Since the distance function is continuous, we find a small open neighbourhood U_x such that dist(z, γ) is attained by a curve point near y₁ and/or y₂ for every z ∈ U_x, i.e. dist(z, γ) = min{dist(z, γ₁), dist(z, γ₂)}, where γ₁ and γ₂ are small arcs around y₁ and y₂. Further, U_x may be chosen small enough that the projection onto the single arcs γ₁ and γ₂ is single-valued in U_x, so that proj_{γ₁} and proj_{γ₂} become continuously differentiable by Theorem (Dudek–Holly). The ambiguity points in U_x are the zeros of the function

Φ(z) := dist(z, γ₁) − dist(z, γ₂).

Because the gradient of Φ cannot vanish as long as y₁ and y₂ are distinct points, Dini's implicit function theorem [ : Thm B.] states that the ambiguity set A₂ in an open neighbourhood Ũ_x ⊂ U_x around x is the realization of a continuously differentiable map φ_x : ℝ^(d−1) → ℝ^d and thus a Lebesgue zero set by Sard's theorem [ ]. Since the Euclidean ℝ^d is second-countable, already countably many sets Ũ_x cover A₂, whose union is again a Lebesgue zero set.

Figuratively, higher ambiguities with #[proj_γ(x)] > 2 occur at points where the charts constructed by the implicit function theorem are glued together. The ambiguity set as a whole can be described as follows.

Proposition (Ambiguity Points). Let γ be a Jordan C²-arc. Then the ambiguity set A := {x ∈ ℝ^d : #[proj_γ(x)] > 1} is the closure of A₂.

Proof. Since the distance to the curve is continuous, the points in the closure of A₂ are ambiguous. To show A ⊂ cl A₂, we take an ambiguity point x with #[proj_γ(x)] > 2. In two dimensions, the set proj_γ(x) is located on a circle. Since γ is not closed, we can either shrink the circle and move it into a gap between two projection points or, if proj_γ(x) lies on a half-sphere, move the circle outwards and enlarge it, see Figure. In both cases, the centre of the deformed circle is contained in A₂. By controlling the radius, the centre may be arbitrarily close to x. This construction generalizes to ℝ^d by changing the radius and moving the sphere containing the projection points in several steps.

Since A₂ is locally a hypersurface, the Lebesgue measure of its closure remains zero. On the basis of a different argumentation, the statement has already been shown by H [ ].

Figure: The projections of the ambiguity point x onto the curve γ lie on a circle. Changing the radius and moving the circle around, we find an arbitrarily close point with exactly two projections.

Figure: The two circles correspond to the worst-case normal rings at P that touch Q and vice versa. The maximal approximation error may be attained in the middle of [PQ].

Approximation by Polygonal Chains
If the curvature of a planar curve γ is bounded by κ_max, then the approximation error between an arc γ_[PQ] and the line segment [PQ] can be bounded classically in terms of κ_max and the length of the segment. Since we, however, have no information about the current arc γ_[PQ] living in ℝ^d, we use a more geometric consideration to estimate the maximal approximation error.

Theorem (Approximation Error). Let γ : [0, ℓ(γ)] → ℝ^d be a ρ-separated Jordan C²-arc, and let P and Q be distinct points on γ such that h := dist(P, Q) < 2ρ. The maximal approximation error is then bounded by

dist(γ_[PQ], [PQ]) ≤ ρ − √(ρ² − h²/4).

Proof. We first discuss the planar setting, where the normal ring becomes the union of two open discs of radius ρ for inner points and is extended by an open half-disc at the end points. By the ρ-separation, the normal rings of γ_[PQ] are not allowed to cover P or Q. To estimate the approximation error, let us consider the worst-case normal rings at P that also touch Q and vice versa, i.e. the two discs of radius ρ touching P and Q. The situation is schematically shown in Figure. The main observation is now that the arc γ_[PQ] has to lie in the intersection of the two discs. For this, we notice two things: first, γ cannot cross the boundary of the intersection except at P and Q since otherwise the normal ring at the crossing point would cover P or Q; second, γ cannot start in the intersection, leave at P, go around the intersection, and enter again at Q due to the end point condition of the ρ-separability. Since the two circular segments shown in Figure are possible paths for γ_[PQ], the maximal Hausdorff approximation error is attained in the middle of [PQ]. The Pythagorean theorem now yields the assertion. Similarly, in higher dimensions, γ_[PQ] is contained in the intersection of all open balls of radius ρ touching P and Q. Possible arcs from P to Q with maximal Hausdorff approximation error are the circular segments of a two-dimensional cut through the intersection containing P and Q, which looks exactly as in the planar setting. The assertion again follows by the Pythagorean theorem.

If the end points P and Q of γ_[PQ] are inaccurate, the error analysis may be adapted by enlarging the area that contains the true arc.

Corollary (Approximation Error). Let γ : [0, ℓ(γ)] → ℝ^d be a ρ-separated Jordan C²-arc, and let P̃, Q̃ with dist(P̃, γ) ≤ ε, dist(Q̃, γ) ≤ ε, and h := dist(P̃, Q̃) < 2(ρ − ε) be approximations of the curve points P, Q. The maximal approximation error of the arc γ_[PQ] is then bounded by

dist(γ_[PQ], [P̃Q̃]) ≤ ρ − √(ρ² − (h + 2ε)²/4) + ε.

Proof. Extend the line segment [P̃Q̃] by a segment of length ε on both ends, and move the constructed circular segments in every two-dimensional cut away from the approximation line by ε to cover the true end points of the arc γ_[PQ].
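For quick numerical use, the baseline bound of the theorem, as reconstructed above, amounts to the sagitta of a circular segment of radius ρ over a chord of length h:

```julia
# Sagitta bound on the arc-to-chord error (assuming the reconstructed
# closed form from the theorem above), valid for 0 ≤ h < 2ρ.
sagitta(ρ, h) = ρ - sqrt(ρ^2 - h^2 / 4)

println(sagitta(1/8, 0.05))   # ≈ 0.0025, e.g. for a 1/8-separated curve
```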
The above estimate only considers the distance between the curve points and the maximal curvature encoded in the well-separation. It is possible to improve the bounds by additionally using the tangents at P and Q. In doing so, the main improvement can be seen for the approximation of long arcs γ_[PQ]. To control the Hausdorff approximation error numerically without knowing the curve itself, the length of the line segments has to be rather small such that the angle between the tangents and the direction of the polygonal line segment is negligible.

Recovering the Underlying Jordan Curve
The identification of a curve-based sleeve function consists of two central parts. On the one side, we have to approximate the unknown structure function g and, on the other side, the underlying curve γ. Assume for this section that the differentiable structure function g : [0, ∞) → ℝ is strictly monotonically increasing with g(0) = 0 and g′ > 0 and is known in advance. For any point x ∈ ran γ, the sleeve function f thus becomes zero. Otherwise, if x is unambiguous, Theorem (Dudek–Holly) ensures that f is differentiable with gradient

∇f(x) = 2 g′(dist(x, γ)²) (x − proj_γ(x));

so the negative gradient points directly to proj_γ(x). Moreover, the distance to the curve is encoded in the function value f(x) and may be determined by inverting the strictly monotone g. Together, this allows us to compute the projection onto the unknown curve by evaluating f and ∇f.
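A minimal sketch of this projection step, assuming queries of f and ∇f and a known inverse ginv of the profile (all names are ours), reads as follows:

```julia
using LinearAlgebra

# f, ∇f: queries of the sleeve function and its gradient; ginv: inverse of g.
function project_to_curve(f, ∇f, ginv, x)
    fx = f(x)
    fx ≤ 0 && return copy(x)               # x already lies on the curve
    dist = sqrt(ginv(fx))                  # distance recovered by inverting g
    return x - dist * normalize(∇f(x))     # −∇f(x) points towards proj_γ(x)
end
```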
On the basis of this projection, we approximate the underlying curve by a polygonal chain. If [PQ] is the last line segment, we consider R := Q + t · PQ→/dist(P, Q), which may be seen as an extension of the segment in the direction of PQ→, and project this point back onto γ by Algorithm . If the step size t is chosen appropriately, the distance of the new point to Q can be controlled from below and from above.

Theorem (Step Size Guarantee). Let γ : [0, ℓ(γ)] → ℝ^d be a ρ-separated Jordan C²-arc, let [PQ] be the last segment of the current polygonal chain with h := dist(P, Q) ≤ η < ρ, and let v := PQ→/dist(P, Q). For the step size t* chosen in the proof, the new point Q̃ := proj_γ(Q + t* v) then satisfies dist(Q, Q̃) ≤ η together with a positive lower bound on dist(Q, Q̃), or, if the lower bound does not hold, Q̃ is an end point of γ.

Proof. Due to the restriction η < ρ, the parameter t* is bounded by 3η/4; so the distance between the point R := Q + t* v and γ is η at the most, which ensures that the projection onto γ is unique, and that Q̃ is hence well defined. Next, we study the maximal possible projection distance r_t := dist(Q̃_t, R_t) with Q̃_t := proj_γ(R_t) and R_t := Q + t v for arbitrary step sizes t. Because of the ρ-separation, the normal ring around Q̃_t is not allowed to cover Q. For a fixed t, the distance between Q̃_t and Q is thus bounded in terms of t and r_t. The step size t* in the assertion is the unique solution of the resulting equation that ensures that the upper bound becomes exactly η.
Inserting t* into the lower bound, we obtain the guaranteed minimal step size. Note that the lower bound is monotonically decreasing with respect to h because the path of γ is less restricted for larger h, resulting in an increasing maximal projection distance r_t; so the lower bound is limited from below. Rearranging the argument of the square root, squaring the remaining term, and applying the mean value theorem to the square root, we finally obtain the claimed bound.

The calculated optimal step size t* depends on h := dist(P, Q). Figuratively, if the last step has been small, the region where γ runs and the related uncertainty become smaller, resulting in a greater step size, and vice versa. The guaranteed minimal and maximal step sizes allow us to move along the unknown curve without getting stuck. If the underlying curve is well separated, we can, moreover, control the approximation error for the obtained polygonal chain.

Algorithm (Underlying Jordan Curve Approximation).
Input: ρ > 0, η ∈ (0, ρ), x₀ ∈ ℝ^d with ∇f(x₀) well defined.
Output: polygonal chain γ̃ with dist(γ̃, γ) ≤ η.
This algorithm starts somewhere on the underlying curve and iteratively moves in both directions until the end points of the curve are reached. Due to the guaranteed length of the appended line segments, the procedure ends after finitely many steps.
Proof. The chosen step size t := min{η, 2(ρ² − (ρ − η)²)^(1/2)} ensures that all projections during the algorithm are well defined and single-valued by Theorem (Single-Valued Projection); so Algorithm can be executed. The maximal step size guarantee in Theorem (Step Size Guarantee) further yields dist(x_k, x_{k+1}) ≤ η, resulting in dist(γ_[x_k x_{k+1}], [x_k x_{k+1}]) ≤ η for all line segments by Theorem (Approximation Error). Since the length of the line segments [x_k x_{k+1}] is bounded from below by Theorem (Step Size Guarantee), and since γ has a finite length, the iterations ( ) and ( ) terminate as soon as the end points are reached, which happens in finitely many steps.
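The iteration can be sketched schematically as follows; `project` stands for the projection computed from f and ∇f as above, and the fixed step size `t` is a placeholder for the adaptive step size t* of the theorem, so this is an illustration rather than the paper's exact algorithm:

```julia
using LinearAlgebra

function trace_curve(project, x0, dir0; t = 0.05, maxiter = 1_000, tol = 1e-8)
    chain = [x0, project(x0 + t * normalize(dir0))]
    for _ in 1:maxiter
        P, Q = chain[end-1], chain[end]
        R = Q + t * normalize(Q - P)      # extend the last segment beyond Q
        y = project(R)                    # project back onto the curve
        norm(y - Q) < tol && break        # no progress: an end point is reached
        push!(chain, y)
    end
    return chain                          # vertices of the polygonal chain
end
```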
If the required projections become inexact, we are, in the worst case, perhaps not able to recover the underlying Jordan curve. For this reason, we now study the caused errors and instabilities in more detail. The central idea is to adapt the calculations in the proof of Theorem (Step Size Guarantee), where the maximal possible projection distance of the point R := Q + t v with v := PQ→/dist(P, Q) has been computed. If the projections are inexact, then we can merely determine the points P and Q in Figure up to a small neighbourhood, say up to an ε-ball; so instead of using ρ-balls touching P and Q to guarantee the minimal and maximal step size, we consider ρ-balls intersecting B̄_ε(P) and B̄_ε(Q). To simplify the calculations, we restrict ourselves to a specific set of balls that results from the following construction, see Figure: (i) take some ρ-ball intersecting B̄_ε(P) and B̄_ε(Q), and consider the plane through P, Q, and the centre of the ball; (ii) denote the intersection points of the ρ-ball with the line through P and Q by P̃ and Q̃; (iii) rotate the ball around P̃ such that the intersection point Q̃ is moved towards Q, which enlarges the maximal projection distance; and (iv) rotate the ball around the new Q̃ and move P̃ such that dist(P̃, Q̃) becomes h + 2ε with h := dist(P, Q).
Without loss of generality, we have assumed that the centre is located below P and Q within the two-dimensional cut. Analogous rotations can be applied if the centre lies above P and Q. The rotation (iv) only enlarges the maximal projection distance corresponding to the original ρ-ball if ε is large compared to h. A brief computation now yields the corresponding bound. For greater ρ, h and smaller ε, the disc of possible projections of Q̃ always touches the ρ-ball on the right-hand side of Q̃ during the rotation; so the maximal projection radius is enlarged.
After rotating a single ρ-ball, the scenery becomes like in Figure. Considering the union of all rotated balls, which is rotationally symmetric around the line through P̃ and Q̃, we see that every two-dimensional cut has this geometry. Instead of deriving an analysis similar to the exact setting, we estimate the deviation caused by the rotations, where tan α = ε/(h/2 − ε). Preliminarily, we establish the following two estimates: first, the Frobenius norm distance between the rotation and the identity is given in terms of the angle α; second, P̃ and Q̃ lie on the circle of radius ρ, and since √10 (h + 2ε)/2 ≤ ρ implies h/2 + ε ≤ √2 ρ/2, the vertical distance is less than the horizontal distance. Exploiting these two bounds, we finally obtain the stated estimate. Incorporating the errors ε_h and ε into the minimal and maximal step size derived in Theorem (Step Size Guarantee), the step size t* is lessened to guarantee dist(Q, Q̃) ≤ η without uncertainties. Adding them, we have dist(Q, Q̃) ≤ η at the most. The minimal step size is adapted similarly.
The essential statement behind Lemma is that the geometry exploited to establish the minimal and maximal step sizes changes only slightly if the projections used to obtain P and Q become inaccurate. Based on the projection error ε > 0, we define the inexact step size accordingly. The lower bound of the inexact step size guarantee depends on the current h; so the lower bound may become arbitrarily small by iterating the result. Proposing an additional assumption, we may nevertheless guarantee that a slight modification of Algorithm recovers the underlying Jordan curve up to a certain Hausdorff distance even if the projection onto the curve cannot be performed exactly.

Theorem (Termination). Let γ : [0, ℓ(γ)] → ℝ^d be a ρ-separated Jordan C²-arc, and let ε ∈ [0, ρ/4] be the maximal numerical error of proj_γ. Adapting Algorithm by choosing the step size according to the inexact step size guarantee, the algorithm again terminates after finitely many steps; in the final estimate, we exploit ρ ≥ 4ε and again the last assumption. The existence of an appropriate h > 0 is thus guaranteed.
Remark. Due to the assumptions, the maximal step size t is always positive. If ε becomes small, then the additional assumptions in Theorem (Termination) are always satisfiable. The other way round, for a given ρ and η, the theorem enables us to calculate an upper bound for the maximal projection error ε that guarantees the success of Algorithm by solving a simple quadratic equation.

Identifying the Structure Function
Besides the underlying Jordan curve, the structure function g has to be approximated too. Having a non-curve point x and its projection y := proj_γ(x), we have immediate access to g via

g₂(t) := g(t²) = f(y + t (x − y)/‖x − y‖)

for t ≥ 0 until an ambiguity point is hit. Since γ is a finite-length Jordan arc, we henceforth assume ran γ ⊂ B̄_{1/2}(0) for simplicity. Other domains may be considered analogously. Restricting our interest to an approximation of f on B̄_{1/2}(0), we only have to determine g on the interval [0, 1]. We have here two possibilities: either g is approximated directly, or g₂ : t ↦ g(t²) is approximated. We choose the second approach because g₂ immediately represents the slopes of the sleeve function.
For simplicity, we approximate g₂ by a linear spline with equispaced knots. Beneficially, this simplifies the inversion in Algorithm , where we can take g₂⁻¹(·) instead of the step size √(g⁻¹(·)). The approximation of g₂ with step size δ > 0 may be incorporated into Algorithm for a ρ-separated Jordan curve in the following manner (a schematic sketch of the sampling step follows below):
1. From the start point x₀, the structure function g₂ is sampled equispaced in the direction ∇f(x₀) until the curve or an ambiguity point is hit.
2. This especially gives an approximation of g₂ on [0, η] such that Algorithm can be performed.
3. Determine the point y = γ(s) with the largest norm. Note that y ⊥ γ′(s), that proj_γ((1 + t) y) = y for t ≥ 0, and that the ray {(1 + t) y : t ≥ 0} cannot contain any ambiguity point; so the approximation of g₂ may be extended onto [0, 1] by sampling in this direction.
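A possible realization of the sampling and spline evaluation, with hypothetical names and a plain piecewise-linear interpolation, could look as follows:

```julia
# Sample g₂(t) = g(t²) = f(p + t v) along a unit normal direction v at the
# curve point p; the returned knots and values define a linear spline.
function sample_profile(f, p, v, δ, n)
    ts = δ .* (0:n)                               # equispaced knots on [0, nδ]
    return ts, [f(p + t * v) for t in ts]         # knots and sampled values
end

# Piecewise-linear evaluation of the sampled profile at t:
function eval_spline(ts, gs, t)
    i = clamp(searchsortedlast(ts, t), 1, length(ts) - 1)
    λ = (t - ts[i]) / (ts[i+1] - ts[i])           # local interpolation weight
    return (1 - λ) * gs[i] + λ * gs[i+1]
end
```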
Remark. Note that the collected samples give only rough information about the root of t ↦ f(x₀ − t ∇f(x₀)), which is located somewhere around the last sample point before the curve is hit. To improve the approximation of g₂ in ( ), which is crucial to compute proj_γ by Algorithm , we therefore apply the Newton–Raphson method. During the numerical experiments, we usually require only a few iterations to find the root with sufficient accuracy.
Due to the approximation of g₂, the projection computed by Algorithm becomes inexact since the true step size g₂⁻¹(·) is not available. Depending on the local derivative of g₂, the approximation error is here amplified or reduced; therefore, we again assume that the projection error is bounded by an appropriate ε > 0. If the first and second derivatives of the true structure function g are bounded, and if the step size δ is chosen appropriately, then the total approximation error for the sleeve function f may be controlled.

Numerical Experiments
Besides the theoretical guarantees for the approximation of curve-based sleeve functions by polygonal chains and linear splines, we next present several examples to show that the established concepts may be carried out numerically. All algorithms and experiments have been implemented in Julia.
Oddly enough, the main obstacle is here the numerical evaluation of the true sleeve function f(x) := g(dist(x, γ)²) and its derivatives, even if the profile g and the underlying Jordan curve γ are known analytically, since all computations require the projection onto γ. The projection may be computed by minimizing the function F(t) := ‖x − γ(t)‖² over the interval [0, ℓ(γ)]. For this purpose, we use the well-known Newton–Raphson method, i.e. t_{k+1} := t_k − F′(t_k)/F″(t_k), where F is sampled equispaced to find an appropriate starting value t₀, and where the end points are considered separately; see the sketch below. In general, the projection of a point onto a parametric curve is a non-trivial problem by itself, see for instance [ ; ; ; ] and the references therein. In contrast, the projection onto the polygonal chain γ̃ is straightforward: project onto each line segment and take the global minimizer.

The Archimedean spiral is a 1/8-separated C^∞-curve. Since the maximal projection error is unknown, we perform Algorithm with ε := 0, i.e. we assume that the error caused by the approximation of the true profile is negligible. The results of Algorithm with ε := 0, η := 10⁻², and δ := 10⁻⁴ are shown in Figure. Both the profile and the curve are well estimated. Since the established approximation errors are completely independent of the ambient dimension, we expect similar results for the two one-dimensional approximation tasks in higher dimensions.
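The projection step described above can be sketched as follows; the derivatives of F are approximated here by central differences, whereas analytic derivatives would typically be preferred, and all names are our own:

```julia
# Project x onto the parametric curve γ : [0, ℓ] → ℝ^d by minimizing
# F(t) = ‖x − γ(t)‖² with Newton's method; t₀ from an equispaced search.
function newton_project(γ, x, ℓ; nsample = 200, iters = 20, h = 1e-6)
    F(t) = sum(abs2, x - γ(t))
    ts = range(0, ℓ, length = nsample)
    t = ts[argmin([F(s) for s in ts])]            # starting value t₀
    for _ in 1:iters
        dF  = (F(t + h) - F(t - h)) / (2h)        # F′(t), central difference
        d2F = (F(t + h) - 2F(t) + F(t - h)) / h^2 # F″(t), central difference
        abs(d2F) < eps() && break
        t = clamp(t - dF / d2F, 0, ℓ)             # Newton step, kept in [0, ℓ(γ)]
    end
    return t                                      # parameter of the projection
end
```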
Up to now, we have assumed that we have access to the function values and derivatives of the true sleeve function f. If the derivatives are not available, numerical differentiation

Figure: Sleeve function based on a half-ellipse and an identity profile. Note that the approximated profile g₂ is nevertheless non-linear. The required derivatives are approximated by symmetric difference quotients. The additional error is here marginal compared to the approximation using exact derivatives.
may be used instead. In this numerical example, we approximate the required derivatives using the symmetric difference quotient

[∇f(x)]_i ≈ (f(x + σ ε_i) − f(x − σ ε_i)) / (2σ) for i = 1, …, d,

where ε_i denotes the i-th unit vector. Letting the profile g be the identity, which nevertheless results in a locally non-linear sleeve function, and applying Algorithm with the parameter set ε := 0, η := 10⁻³, δ := 10⁻⁴, and σ := 10⁻⁸, we obtain the results in Figure. The additional error caused by the finite differences is here negligible.
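The symmetric difference quotient itself is straightforward to implement; a generic sketch:

```julia
# [∇f(x)]ᵢ ≈ (f(x + σ eᵢ) − f(x − σ eᵢ)) / (2σ) for all coordinates i.
function fd_gradient(f, x; σ = 1e-8)
    map(eachindex(x)) do i
        e = zero(x); e[i] = 1                     # i-th unit vector
        (f(x + σ * e) - f(x - σ * e)) / (2σ)
    end
end
```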

Vanishing Derivatives
In the last example, we consider again the half-ellipse, but here together with the profile function g : t ↦ t². Differently from above, the derivative g′ is here not bounded away from zero on [0, η]; so the required (again exact) derivative ∇f may nearly vanish for small step sizes. The results of the approximation with ε := 0, η := 10⁻², and δ := 10⁻⁴ are shown in Figure. Notice that the approximation of the underlying curve is much worse compared with the previous example. If η becomes smaller, the polygonal chain starts to oscillate around the true curve. The projection error caused by the loss of significance in Algorithm becomes clearly visible. From a numerical point of view, it is crucial that g′ is bounded from below away from zero although this bound does not occur in the derived approximation guarantees.
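The loss of significance can be reproduced in isolation (our own toy computation): inverting g₂(t) = t⁴ near the curve amplifies a tiny evaluation error drastically since the derivative of the inverse, (g₂⁻¹)′(y) = y^(−3/4)/4, blows up as y → 0.

```julia
g(t) = t^2                        # profile with g′(0) = 0
t = 1e-3                          # true distance to the curve
y = g(t^2)                        # exact value of g₂(t) = g(t²) = t⁴
δ = 1e-9                          # small evaluation error in the sleeve value
t̂ = (y + δ)^(1 / 4)               # distance recovered by inverting g₂
println((t, t̂))                   # (0.001, ≈0.0056): almost a factor of six off
```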

Figure: Rotating a specific ball of the normal ring intersecting the ε-balls around P and Q to standardize the local scenery. Both rotations (iii) and (iv) enlarge the estimate of the maximal projection distance for R.

Figure: Worst-case scenario, where the ball with the enlarged maximal projection distance around R would touch the rotated ball in Q̃. Note that dist(P̃, Q̃) = h + 2ε. To compute the corresponding minimal radius ρ_min, the perpendicular bisector of P̃ and Q̃ and the line through Q̃ and R are used.
Figure: Sleeve function based on the Archimedean spiral and a sine profile. The plots only show the recovered functions since they visually coincide with the true functions.
Recovered profile g.

Figure: Sleeve function based on a space curve and a tangent profile. The plots only show the recovered functions since they visually coincide with the true functions.