A Note on the Hausdorff Distance Between Norm Balls and Their Linear Maps

We consider the problem of computing the (two-sided) Hausdorff distance between the unit ℓ_{p_1} and ℓ_{p_2} norm balls in finite dimensional Euclidean space for 1 ≤ p_1 < p_2 ≤ ∞, and derive a closed-form formula for the same.
We also derive a closed-form formula for the Hausdorff distance between the k_1 and k_2 unit D-norm balls, which are certain polyhedral norm balls in d dimensions, for 1 ≤ k_1 < k_2 ≤ d. When two different ℓ_p norm balls are transformed via a common linear map, we obtain several estimates for the Hausdorff distance between the resulting convex sets. These estimates upper bound the Hausdorff distance or its expectation, depending on whether the linear map is arbitrary or random.
We then generalize these developments to the Hausdorff distance between two set-valued integrals obtained by applying a parametric family of linear maps to different ℓ_p unit norm balls, and then taking the Minkowski sums of the resulting sets in a limiting sense. To illustrate an application, we show that the problem of computing the Hausdorff distance between the reach sets of a linear dynamical system with different unit norm ball-valued input uncertainties reduces to this set-valued integral setting.


Introduction
Given compact X, Y ⊂ R^d, the two-sided Hausdorff distance δ between them is defined as

δ(X, Y) := max{ sup_{x ∈ X} inf_{y ∈ Y} ∥x − y∥_2 , sup_{y ∈ Y} inf_{x ∈ X} ∥x − y∥_2 },   (1)

where ∥·∥_2 is the Euclidean norm with the associated scalar product ⟨·, ·⟩.
Denoting the unit 2-norm ball in R^d as B^d_2, an equivalent definition of the Hausdorff distance is

δ(X, Y) = inf{ ε ≥ 0 : X ⊆ Y ∔ εB^d_2 and Y ⊆ X ∔ εB^d_2 },   (2)

where ∔ denotes the Minkowski sum. As is well-known [1, p. 60-61], δ ≥ 0 is a metric. The distance was introduced by Hausdorff in 1914 [2, p. 293ff], and can be considered more generally on the set of nonempty closed and bounded subsets of a metric space (M, dist) by replacing the Euclidean distance ∥·∥_2 in (1) with dist(·, ·). The Hausdorff distance and the associated topology have found widespread applications in mathematical economics [3], stochastic geometry [4], set-valued analysis [5], image processing [6] and pattern recognition [7]. The distance δ has several useful properties with respect to set operations, see e.g., [8].

For a compact convex set K ⊂ R^d, its support function h_K is defined as

h_K(y) := sup_{x ∈ K} ⟨y, x⟩,  y ∈ S^{d−1},   (3)

where ⟨·, ·⟩ denotes the standard Euclidean inner product, and S^{d−1} is the unit sphere in R^d. The definition (3) can be extended to any closed convex set K in the sense h_K = +∞ if and only if K is unbounded [10, Prop. 2.1.3]. Geometrically, h_K(y) gives the signed distance of the supporting hyperplane of K with outer normal vector y, measured from the origin. The support function h_K(y) uniquely determines the set K. Since only the direction of the normal vector y matters, we restrict the domain of the support function to S^{d−1} instead of R^d. Doing so invites no loss of generality because a support function h_K(·) is always positive homogeneous of degree one (see e.g., [10, p. 209]). Furthermore, h_K(y) is a convex function of y. From (3), we note that for given T ∈ R^{d′×d} and compact convex K ⊂ R^d, the support function of the compact convex set TK := {Tx : x ∈ K} ⊂ R^{d′} is

h_{TK}(y) = h_K(T^⊤ y).   (4)

For more details on the support function, we refer the readers to [10, Ch. V].
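The transformation property (4) admits a quick numerical sanity check for a polytope K = conv(V), since the support function of a polytope is the maximum inner product over its vertices. The following sketch is illustrative and not part of the original text; the helper name `support_polytope` is our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def support_polytope(V, y):
    """Support function of conv(rows of V): h_K(y) = max_i <v_i, y>."""
    return float(np.max(V @ y))

d, d_prime = 4, 3
V = rng.standard_normal((10, d))       # vertices of K = conv(V) in R^d
T = rng.standard_normal((d_prime, d))  # linear map T : R^d -> R^{d'}

y = rng.standard_normal(d_prime)
# Vertices of TK are T v_i, so h_{TK}(y) = h_K(T^T y), matching (4).
lhs = support_polytope(V @ T.T, y)     # h_{TK}(y)
rhs = support_polytope(V, T.T @ y)     # h_K(T^T y)
assert abs(lhs - rhs) < 1e-9
```

The check exploits that a linear map sends the convex hull of the vertices to the convex hull of their images.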
The two-sided Hausdorff distance (1) between a pair of convex compact sets K_1 and K_2 in R^d can be expressed in terms of their respective support functions h_1(·), h_2(·) as

δ(K_1, K_2) = sup_{∥y∥_2 = 1} | h_1(y) − h_2(y) |,   (5)

where the absolute value in the objective can be dispensed with if one set is included in the other. Thus, computing δ leads to an optimization problem over all unit vectors y ∈ S^{d−1}.
The support function, by definition, is positive homogeneous of degree one.Therefore, the unit sphere constraint ∥y∥ 2 = 1 in (5) admits a lossless relaxation to the unit ball constraint ∥y∥ 2 ≤ 1.Even so, problem (5) is nonconvex because its objective is nonconvex in general.
In this study, we consider computing (5) for the case when the sets K_1, K_2 are different unit norm balls, and more generally, linear maps of such norm balls in a Euclidean space. This can be viewed as quantifying the conservatism in approximating one norm ball by another in terms of the Hausdorff distance. We show that computing the associated Hausdorff distances leads to optimizing the difference between norms over the unit sphere or ellipsoid. While bounds on the difference of norms over the unit cube have been studied before [11], the optimization problems arising here seem new, and the techniques in [11] do not apply in our setting.
Motivating application: A practical motivation for our study comes from the control theory and formal verification literature [12][13][14][15][16][17][18][19][20]. There, it is of interest to investigate how controlled dynamical systems evolve relative to each other subject to different set-valued input uncertainties. For example, if the controlled dynamical systems model vehicles driving on a road, then one practical question is whether the set of states reachable by one vehicle at a specific time can intersect that of another, possibly resulting in a collision. The different set-valued inputs in the vehicle context represent the respective actuation uncertainties. Then, a natural way to quantify safety, or the lack of it, is by computing the distance between such sets in terms of the Hausdorff metric.
In such applications, for computational ease, one often assumes box-valued (i.e., ℓ_∞ norm ball) input uncertainty sets even though the true input uncertainty sets might be ℓ_p norm balls for 0 < p < ∞. Such computational approximation in the input uncertainty sets leads to an over-approximation of the reach sets [19, Sec. III]. Then, quantifying the conservatism in over-approximation amounts to computing the Hausdorff distance between such reach sets. When the controlled dynamical systems are linear, it turns out that the corresponding Hausdorff distance (5) takes the form

sup_{∥y∥_2 = 1} ∫_0^t ( ∥T(τ) y∥_{q_2} − ∥T(τ) y∥_{q_1} ) dτ

for a suitable parametric family of matrices T(τ), which is what we investigate in Sec. 4 in this paper.
We also provide an application example in Sec. 4, where the different reach sets result from the motion of a satellite subject to ℓ_2 and ℓ_∞ norm ball-valued uncertain input sets. In this application, the input components denote the radial and tangential thrusts, and depending on the actuators installed (e.g., gas jets, reaction wheels), two different scenarios may arise: one where there are hard bounds on the magnitudes of the individual thrust components (i.e., ℓ_∞ norm ball), and another in which the total thrust magnitude is bounded (i.e., ℓ_2 norm ball). So from an engineering perspective, it is natural to quantify the Hausdorff distance between the reach sets resulting from the two different types of actuation uncertainties.
Related works: There have been several works on designing approximation algorithms for computing the Hausdorff distance between convex polygons [21], curves [22], images [6], meshes [23] or point cloud data [24]; see also [25][26][27][28][29]. There are relatively few known exact formulas [30] for the Hausdorff distance between sets. To the best of the authors' knowledge, the analysis of the Hausdorff distance between norm balls and their linear maps pursued here did not appear in prior literature.
Contributions: Our specific contributions are as follows.
• We deduce a closed-form formula for the Hausdorff distance between the unit ℓ_{p_1} and ℓ_{p_2} norm balls in R^d for 1 ≤ p_1 < p_2 ≤ ∞, i.e., a formula for δ(B^d_{p_1}, B^d_{p_2}). We provide details on the landscape of the corresponding nonconvex optimization objective. We also derive a closed-form formula for the Hausdorff distance between the k_1 and k_2 unit D-norm balls for 1 ≤ k_1 < k_2 ≤ d.
• We derive an upper bound for the Hausdorff distance between the common linear transforms of the ℓ_p and ℓ_q norm balls. This upper bound is a scaled 2 → q induced operator norm of the linear map, where 1 ≤ q ≤ ∞ and the scaling depends on both p and q. We point out a class of linear maps for which the aforesaid closed-form formula for the Hausdorff distance is recovered, thereby broadening the applicability of the formula.
• Bringing together results from the random matrix theory literature, we provide upper bounds for the expected Hausdorff distance when the linear map is random with independent mean-zero entries, for two cases: when the entries have magnitude at most unity, and when the entries are standard Gaussian.
• We provide a certain generalization of the aforesaid formulation by considering the Hausdorff distance between two set-valued integrals. These integrals represent convex compact sets obtained by applying a parametric family of linear maps to the unit norm balls, and then taking the Minkowski sums of the resulting sets in a suitable limiting sense. We highlight an application of the same in computing the Hausdorff distance between the reach sets of a controlled linear dynamical system with unit norm ball-valued input uncertainties.

The organization is as follows. In Sec. 2, we consider the Hausdorff distance between unit norm balls for two cases: ℓ_p norm balls for different p, and D-norm balls parameterized by different parameter k. We discuss the landscape of the corresponding nonconvex optimization problem and derive closed-form formulas for the Hausdorff distance. Sec. 3 considers the Hausdorff distance between the common linear transformation of different ℓ_p norm balls, and bounds the same when the linear map is either arbitrary or random. In Sec. 4, we consider an integral version of the problem considered in Sec. 3 and illustrate one application in controlled linear dynamical systems with set-valued input uncertainties where this structure appears. These results could be of independent interest.
Notations and preliminaries: Most notations are introduced in situ. We use [n] := {1, 2, . . ., n} to denote the set of natural numbers from 1 to n. Boldfaced lowercase and boldfaced uppercase letters denote vectors and matrices, respectively. The symbol E denotes the mathematical expectation, card(·) denotes the cardinality of a set, the superscript ⊤ denotes matrix transpose, and the superscript † denotes the appropriate pseudo-inverse. For a column vector x ∈ R^d whose components are differentiable with respect to (w.r.t.) a scalar parameter t, the symbol ẋ denotes the componentwise derivative of x w.r.t. t. The notation ⌊·⌋ stands for the floor function that returns the greatest integer less than or equal to its real argument. The function exp(·) with matrix argument denotes the matrix exponential. The inequality ⪰ is to be understood in the Löwner sense; e.g., saying S is a symmetric positive semidefinite matrix is equivalent to stating S ⪰ 0.

For a norm ∥·∥ on R^d, its dual norm ∥·∥_* is defined to be the support function of its unit norm ball, i.e.,

∥y∥_* := sup { ⟨y, x⟩ : ∥x∥ ≤ 1 }.

The notation above emphasizes that the dual norm is a function of the vector y. For 1 ≤ p ≤ ∞, it is well known that the dual of the ℓ_p norm is the ℓ_q norm, where q is the Hölder conjugate of p, i.e., 1/p + 1/q = 1. For 1 ≤ p, q ≤ ∞, a matrix M ∈ R^{m×n} viewed as a linear map M : ℓ_p(R^n) → ℓ_q(R^m) has the induced operator norm

∥M∥_{p→q} := sup_{x ≠ 0} ∥M x∥_q / ∥x∥_p,   (6)

where as usual ∥x∥_p := (Σ_i |x_i|^p)^{1/p} and ∥M x∥_q := (Σ_i |(M x)_i|^q)^{1/q} for p, q finite, ∥·∥_∞ is the sup norm, and (M x)_i denotes the ith component of the vector M x. Several special cases of (6) are well known: the case p = q is the standard matrix p norm, the case p = ∞, q = 1 is the Grothendieck problem [31,32] that features prominently in combinatorial optimization, and its generalization p ∈ (1, ∞), q = 1 is the ℓ_p Grothendieck problem [33]. In our development, the operator norm ∥M∥_{2→q} arises where 1 < q ≤ ∞.

Hausdorff Distance between Unit Norm Balls
We consider the case when in (5), the sets are the unit ℓ_p norm balls, i.e., K_i := B^d_{p_i} = {x ∈ R^d : ∥x∥_{p_i} ≤ 1} for i ∈ {1, 2} with 1 ≤ p_1 < p_2 ≤ ∞. Clearly, the Hausdorff distance δ = 0 for p_1 = p_2, and δ > 0 otherwise. Then the corresponding support functions h_1(·), h_2(·) are the respective dual norms, i.e., h_i(y) = ∥y∥_{q_i}, where q_i is the Hölder conjugate of p_i, for 1 ≤ q_2 < q_1 ≤ ∞. By monotonicity of the norm function, we know that ∥·∥_{q_1} ≤ ∥·∥_{q_2}. Therefore, the Hausdorff distance (5) in this case becomes

δ( B^d_{p_1}, B^d_{p_2} ) = sup_{∥y∥_2 = 1} ( ∥y∥_{q_2} − ∥y∥_{q_1} ),   (7)

which has a difference of convex (DC) objective. In fact, the objective is nonconvex (the difference of convex functions may or may not be convex in general) because it admits multiple global maximizers and minimizers. The objective in (7) is invariant under the plus-minus sign permutations among the components of the unit vector y. There are 2^d such permutations feasible in R^d, which implies that the landscape of the objective in (7) has a 2^d fold symmetry. In other words, the feasible set is subdivided into 2^d sub-domains as per the sign permutations among the components of y, and the "sub-landscapes" for these sub-domains are identical.
We can compute the global maximum value achieved in (7) using the norm inequality

∥x∥_s ≤ d^{1/s − 1/r} ∥x∥_r,  x ∈ R^d,  1 ≤ s < r ≤ ∞,   (8)

which follows from Hölder's inequality applied with a pair of conjugate exponents of the form r and r/(r − 1). We apply this inequality with s = q_2 and r = q_1 to bound the objective in (7).

In R^d, the constant d^{1/q_2 − 1/q_1} is sharp because the equality in (8) is achieved by any vector in {−1, 1}^d. Since (7) has the constraint ∥y∥_2 = 1, the corresponding global maximum will be achieved by

y_max ∈ Y_max := { ρ v : v ∈ {−1, 1}^d }.

The scalar ρ is determined by the normalization constraint ∥y_max∥_2 = 1 as ρ = 1/√d. Thus, we obtain

δ( B^d_{p_1}, B^d_{p_2} ) = ∥y_max∥_{q_2} − ∥y_max∥_{q_1} = ρ ( d^{1/q_2} − d^{1/q_1} ) = d^{1/q_2 − 1/2} − d^{1/q_1 − 1/2},   (9)

where in the last line we substituted ρ = 1/√d. The cardinality of Y_max equals 2^d, i.e., there are 2^d global maximizers y_max ∈ S^{d−1} achieving the value (9).
We summarize the above in the following Proposition.

Proposition 1 For 1 ≤ p_1 < p_2 ≤ ∞,

δ( B^d_{p_1}, B^d_{p_2} ) = d^{1/q_2 − 1/2} − d^{1/q_1 − 1/2},   (10)

where q_i denotes the Hölder conjugate of p_i for i ∈ {1, 2}.
Remark 1 As intuition suggests, for a fixed p_1, a larger p_2 results in a larger δ in a given dimension d ≥ 2; see Fig. 1.
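The closed-form value of Proposition 1 can be sanity-checked numerically. The sketch below is illustrative and not from the paper; the choice d = 5, p_1 = 1, p_2 = ∞ (so q_1 = ∞, q_2 = 1) is our own assumption. It evaluates the difference of dual norms at the claimed maximizer and over random unit vectors:

```python
import numpy as np

# Closed-form value d^{1/q2 - 1/2} - d^{1/q1 - 1/2} for d = 5, q1 = inf, q2 = 1
d = 5
inv_q1, inv_q2 = 0.0, 1.0                              # 1/q1 and 1/q2
closed_form = d**(inv_q2 - 0.5) - d**(inv_q1 - 0.5)

# objective of (7) at the claimed maximizer y = (1,...,1)/sqrt(d)
y_max = np.ones(d) / np.sqrt(d)
obj = np.sum(np.abs(y_max)) - np.max(np.abs(y_max))    # ||y||_1 - ||y||_inf
assert abs(obj - closed_form) < 1e-12

# random unit vectors never exceed the closed-form value
rng = np.random.default_rng(1)
Y = rng.standard_normal((20000, d))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
sampled = np.abs(Y).sum(axis=1) - np.abs(Y).max(axis=1)
assert sampled.max() <= closed_form + 1e-12
```

The first assertion confirms that the uniform sign vector scaled to the sphere attains the closed-form value; the second confirms that random sampling does not exceed it.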

Hausdorff Distance between Polyhedral D-Norm Balls
We next show that similar arguments as above can be used to derive the Hausdorff distance between other types of norm balls, such as the D-norm balls, which are certain polyhedral norm balls. The D-norms and their norm balls arise naturally in robust optimization, see e.g., [34, Sec. 2.2], [35]. The D-norm in R^d is parameterized by k, where 1 ≤ k ≤ d, as defined next.

Definition 1 (D-norm) For x ∈ R^d and parameter 1 ≤ k ≤ d, the D-norm is

∥x∥^D_k := max { Σ_{i ∈ S} |x_i| + (k − ⌊k⌋) |x_t| : S ⊆ [d], card(S) ≤ ⌊k⌋, t ∈ [d] \ S }.   (11)
A special case of (11) is when the parameter k is restricted to be a natural number, i.e., k ∈ [d]. Then the D-norm reduces to the so-called k largest magnitude norm, defined next.

Definition 2 (k largest magnitude norm) For x ∈ R^d and k ∈ [d],

∥x∥^D_k = Σ_{j=1}^{k} |x_{i_j}|,   (12)

where |x_{i_1}| ≥ |x_{i_2}| ≥ . . . ≥ |x_{i_d}| denotes the ordering of the magnitudes of the entries in x.
It is easy to verify that (11) (and thus its special case (12)) is indeed a norm, and its dual norm equals [35, Prop. 2]

( ∥y∥^D_k )_* = max { ∥y∥_∞ , ∥y∥_1 / k }.

For a comparison of the D-norm and its dual with the Euclidean norm, see [35, Prop. 3]. We have the following result.
Proposition 2 Let K_1, K_2 be the unit D-norm balls in R^d with parameters k_1, k_2, where 1 ≤ k_1 < k_2 ≤ d. Then

δ( K_1, K_2 ) = ( 1/k_1 − 1/k_2 ) √d.   (13)

Proof Let h_1(·), h_2(·) denote the support functions of K_1, K_2, respectively. Using the definition of dual norm, we have

h_i(y) = max { ∥y∥_∞ , ∥y∥_1 / k_i },  i ∈ {1, 2}.   (14)

Recall that δ relates to h_1, h_2 via (5). Since 1/k_2 < 1/k_1, we have h_2 ≤ h_1, and depending on the value of ∥y∥_∞, we need to consider three subsets of unit vectors. Specifically, for the unit vectors y satisfying ∥y∥_1/k_1 ≤ ∥y∥_∞, both maxima in (14) are achieved by ∥y∥_∞, so h_1(y) − h_2(y) = 0. On the other hand, for the unit vectors y satisfying ∥y∥_∞ ≤ ∥y∥_1/k_2, we get h_1(y) − h_2(y) = (1/k_1 − 1/k_2) ∥y∥_1. Finally, for the unit vectors y satisfying ∥y∥_1/k_2 ≤ ∥y∥_∞ ≤ ∥y∥_1/k_1, we get h_1(y) − h_2(y) = ∥y∥_1/k_1 − ∥y∥_∞ ≤ (1/k_1 − 1/k_2) ∥y∥_1. Therefore, using (5) we get

δ(K_1, K_2) = ( 1/k_1 − 1/k_2 ) sup_{∥y∥_2 = 1} ∥y∥_1.

Using the same arguments as in (9), we obtain sup_{∥y∥_2=1} ∥y∥_1 = √d, achieved by the vectors y = v/√d for v ∈ {−1, 1}^d, which gives (13). □

Fig. 3 shows the landscape of the objective for computing the Hausdorff distance between the unit D-norm balls with k_1 = 1.7 and k_2 = 2.9 in d = 3 dimensions, and as explained in the proof above, there are eight global maximizers given by v/√3 for all v ∈ {−1, 1}^3. In this case, the formula (13) gives δ = 120√3/493 ≈ 0.421594517055305, while the direct numerical estimate of δ from the contours yields 0.421577951149235.
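The worked numbers above can be reproduced numerically. The sketch below is illustrative and not from the paper; it assumes the dual D-norm expression max{∥y∥_∞, ∥y∥_1/k} from [35, Prop. 2] and checks the quoted value 120√3/493 for d = 3, k_1 = 1.7, k_2 = 2.9:

```python
import numpy as np

d, k1, k2 = 3, 1.7, 2.9

def dual_D_norm(Y, k):
    # dual of the D-norm with parameter k: max(||y||_inf, ||y||_1 / k)
    Y = np.atleast_2d(Y)
    return np.maximum(np.abs(Y).max(axis=1), np.abs(Y).sum(axis=1) / k)

closed_form = (1/k1 - 1/k2) * np.sqrt(d)                # formula for delta
assert abs(closed_form - 120*np.sqrt(3)/493) < 1e-12    # value quoted in the text

# the maximizers v / sqrt(3), v in {-1,1}^3, attain the closed-form value
y_star = np.ones(d) / np.sqrt(d)
attained = dual_D_norm(y_star, k1) - dual_D_norm(y_star, k2)
assert abs(attained[0] - closed_form) < 1e-12

# random unit vectors never exceed it
rng = np.random.default_rng(2)
Y = rng.standard_normal((20000, d))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
vals = dual_D_norm(Y, k1) - dual_D_norm(Y, k2)
assert vals.max() <= closed_form + 1e-12
```

The assertions mirror the proof: the uniform sign vectors attain the value, and no sampled unit vector exceeds it.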

Composition with a Linear Map
We next consider a generalized version of (7), given by

sup_{∥y∥_2 = 1} ( ∥T y∥_{q_2} − ∥T y∥_{q_1} ),   (15)

where the matrix T ∈ R^{m×d}, m ≤ d, has full row rank m. Using (4), we can interpret (15) as follows. As before, let p_1, p_2 denote the Hölder conjugates of q_1, q_2, respectively. Then (15) computes the Hausdorff distance between two compact convex sets in R^d obtained as the linear transformations of the m-dimensional ℓ_{p_1} and ℓ_{p_2} unit norm balls via T^⊤ ∈ R^{d×m}, i.e., between the sets T^⊤ B^m_{p_1} and T^⊤ B^m_{p_2}. Since the right pseudo-inverse T^† = T^⊤ (T T^⊤)^{−1}, one can equivalently view (15) as maximizing the difference between the ℓ_{q_2} and ℓ_{q_1} norms over the m-dimensional origin-centered ellipsoid with shape matrix T T^⊤.
As was the case in (7), problem (15) is a DC programming problem with a nonconvex objective. However, unlike (7), there is now no obvious symmetry in the objective's landscape that can be leveraged, because the number and locations of the local maxima or saddle points depend sensitively on the matrix parameter T; see the first column of Table 1. Thus, directly using off-the-shelf solvers such as [36,37] or nonconvex search algorithms becomes difficult for solving (15) in practice, as the iterative search may get stuck at a local stationary point.
Remark 2 We can also consider the Hausdorff distance between the common linear transforms of the different polyhedral D-norm balls discussed earlier. Specifically, if K_1, K_2 are the unit D-norm balls with parameters 1 ≤ k_1 < k_2 ≤ d, then by (4) and the same steps as in the proof of Proposition 2, the Hausdorff distance δ between the sets T K_1, T K_2 equals

δ( T K_1, T K_2 ) = ( 1/k_1 − 1/k_2 ) ∥T^⊤∥_{2→1} = ( 1/k_1 − 1/k_2 ) ∥T∥_{∞→2},   (16)

where the last equality follows from the relation between the induced norm of an operator and that of its adjoint.

Estimates for Arbitrary T
We next provide an upper bound for (15) in terms of the operator norm ∥T∥_{2→q_1}.
Proposition 3 (Upper bound) Let T ∈ R^{m×d}. Then for 1 ≤ q_2 < q_1 ≤ ∞, we have

sup_{∥y∥_2 = 1} ( ∥T y∥_{q_2} − ∥T y∥_{q_1} ) ≤ ( m^{1/q_2 − 1/q_1} − 1 ) ∥T∥_{2→q_1}.   (17)

Proof Proceeding as in Sec. 2, for y ∈ S^{d−1} we get

∥T y∥_{q_2} − ∥T y∥_{q_1} ≤ ( m^{1/q_2 − 1/q_1} − 1 ) ∥T y∥_{q_1} ≤ ( m^{1/q_2 − 1/q_1} − 1 ) ∥T∥_{2→q_1} ∥y∥_2,

and taking the supremum over ∥y∥_2 = 1 yields (17).

□
Recall that 1 < q_1 ≤ ∞. When 1 ≤ q_2 < q_1 ≤ 2, the operator norm ∥T∥_{2→q_1} is, in general, NP-hard to compute [38][39][40], except in the well-known case q_1 = 2, for which ∥T∥_{2→2} = σ_max(T), the maximum singular value of T. When 1 ≤ q_2 ≤ 2 < q_1 ≤ ∞, the norm ∥T∥_{2→q_1} is often referred to as hypercontractive [41], and its computation for generic T ∈ R^{m×d} is relatively less explored (see e.g., [41,42]), except for the case q_1 = ∞, for which ∥T∥_{2→∞} = max_{i=1,...,m} ∥T(i, :)∥_2 (the maximum ℓ_2 norm of a row). Hypercontractive norms and related inequalities find applications in establishing rapid mixing of random walks, as well as in several problems of interest in theoretical computer science [41], [43][44][45]. Table 1 reports our numerical experiments to estimate (15) with q_1 = 2, q_2 = 1, for five random realizations of T ∈ R^{3×3}, arranged as the rows of Table 1. For visual clarity, the contour plots in the first column of Table 1 depict only four high-magnitude contour levels. These results suggest that the landscape of the nonconvex objective in (15) depends sensitively on the mapping T.
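The bound of Proposition 3 is easy to exercise numerically in the efficiently computable case q_1 = 2, q_2 = 1, where ∥T∥_{2→2} = σ_max(T). The sketch below is illustrative and not from the paper (the 3×3 Gaussian T mirrors the Table 1 setup):

```python
import numpy as np

rng = np.random.default_rng(3)
m, d = 3, 3
T = rng.standard_normal((m, d))

# right hand side of (17) with q1 = 2, q2 = 1: (m^{1 - 1/2} - 1) * sigma_max(T)
bound = (m**0.5 - 1) * np.linalg.norm(T, 2)

# Monte Carlo lower estimate of (15): sample unit vectors y and evaluate
# ||T y||_1 - ||T y||_2
Y = rng.standard_normal((20000, d))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
Z = Y @ T.T
sampled = (np.abs(Z).sum(axis=1) - np.linalg.norm(Z, axis=1)).max()
assert sampled <= bound + 1e-9
```

The assertion confirms that the sampled objective never exceeds the operator-norm bound, consistent with the proposition.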
We can say more for specific classes of T. For example, notice from (17) that if the mapping T : ℓ_q(R^d) → ℓ_q(R^m) is an isometry, i.e., ∥T y∥_q = ∥y∥_q, then the upper bound is achieved by any y ∈ R^d such that √d y ∈ {−1, 1}^d, as in Sec. 2, and we recover the exact formula (10). We can characterize these isometric maps as follows.
(i) (See e.g., [46, Remark 3.1]) For q = 2, the mapping T ∈ R^{m×d} is an isometry if and only if T^⊤ T = I_d, i.e., T is a column-orthonormal matrix.
The following is an immediate consequence of this characterization.

An instance in which ∥T∥_{2→q_1}, and hence the bound (17), is efficiently computable occurs when T ∈ R^{m×d} is elementwise nonnegative and 1 ≤ q_1 < 2. In this case, the operator norm ∥T∥_{2→q_1} is known [39, Thm. 3.3] to be equal to the optimal value of the following convex optimization problem:

OPT := max { ∥ dg( T X T^⊤ ) ∥^{1/2}_{q_1/2} : X ⪰ 0, Σ_{i=1}^d X_ii ≤ 1 },   (18)

where dg(·) takes a square matrix as its argument and returns the vector comprising the diagonal entries of that matrix. To see why problem (18) is convex, notice that X ⪰ 0 has a unique (principal) square root, so T X T^⊤ = (T X^{1/2})(T X^{1/2})^⊤ ⪰ 0, which implies dg(T X T^⊤) has nonnegative entries. Consequently, the objective in (18) is concave for 1 ≤ q_1 < 2, and the feasible set {X ⪰ 0, Σ_i X_ii ≤ 1} is the intersection of the positive semidefinite cone with a linear inequality, hence convex (in fact a spectrahedron).

Table 1: Landscapes of ∥T y∥_{q_2} − ∥T y∥_{q_1} for q_1 = 2, q_2 = 1, y ∈ S^2, shown in spherical coordinates for five randomly generated T ∈ R^{3×3} with independent standard Gaussian entries. The middle column reports the numerically estimated global maxima from the respective contour data, i.e., the estimated Hausdorff distance (15). The last column shows the corresponding bounds (17).
Then, the right hand side of (17) equals (m^{1/q_2 − 1/q_1} − 1) OPT, and a numerical solution of (18) via cvx [49] gives OPT ≈ 7.425702405524379. As in Table 1, a direct numerical search over the nonconvex landscape (Fig. 4) for this example returns the estimated Hausdorff distance ≈ 1.888517738190415, while using the numerically computed OPT, we find the upper bound (17) ≈ 1.930096365450782.

Remark 3
We clarify here that for (18) to be used in the upper bound (17), the range of q_1 is 1 < q_1 < 2. That ∥T∥_{2→q_1} equals (18) holds also for the case q_1 = 1. Indeed, this implies that we can compute (16) for elementwise nonnegative T by computing ∥T^⊤∥_{2→1} via convex optimization.
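For small matrices, ∥T∥_{2→1} can also be computed exactly without convex optimization, using the identity ∥T y∥_1 = max_{s ∈ {−1,1}^m} s^⊤ T y, which gives ∥T∥_{2→1} = max_{s ∈ {−1,1}^m} ∥T^⊤ s∥_2. The brute-force sketch below (our own, exponential in the number of rows m, so only viable for small m) illustrates this on a random elementwise nonnegative T:

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
m, d = 4, 6
T = np.abs(rng.standard_normal((m, d)))   # elementwise nonnegative map

def op_norm_2_to_1(T):
    # ||T||_{2->1} = max over sign vectors s in {-1,1}^m of ||T^T s||_2
    m_rows = T.shape[0]
    return max(np.linalg.norm(T.T @ np.array(s))
               for s in itertools.product((-1.0, 1.0), repeat=m_rows))

exact = op_norm_2_to_1(T)
# for elementwise nonnegative T, the all-ones sign pattern is optimal
assert abs(exact - np.linalg.norm(T.sum(axis=0))) < 1e-9

# any unit vector gives a lower bound: ||T y||_1 <= ||T||_{2->1}
Y = rng.standard_normal((20000, d))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
assert np.max(np.abs(Y @ T.T).sum(axis=1)) <= exact + 1e-9
```

This enumeration also makes concrete why the general problem is hard: the 2^m sign patterns correspond to the facets of the ℓ_∞ ball arising in the dual formulation.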

Estimates for Random T
For random linear maps T : ℓ_q(R^d) → ℓ_q(R^m), it is possible to bound the expected Hausdorff distance (15). We collect two such results in the following proposition.
Proposition 6 (Bound for the expected Hausdorff distance) (i) Let T = (θ_ij)_{i,j=1}^{m,d} have independent (not necessarily identically distributed) mean-zero entries with |θ_ij| ≤ 1 for all index pairs (i, j). Then the Hausdorff distance (15) satisfies

E[ δ ] ≤ C_{q_1} ( m^{1/q_2 − 1/q_1} − 1 ) max { √d , m^{1/q_1} },

where the pre-factor C_{q_1} depends only on q_1. (ii) Let T = (θ_ij)_{i,j=1}^{m,d} have independent standard Gaussian entries. Then the Hausdorff distance (15) satisfies

E[ δ ] ≤ C ( m^{1/q_2 − 1/q_1} − 1 ) ( γ_{q_1} m^{1/q_1} + √d ),

Fig. 4: The landscape of the objective in (15) depicted in spherical coordinates for the problem data given in (19).
where C > 0 is a constant, and γ_r := (E|X|^r)^{1/r}, r ≥ 1, is the L_r norm of a standard Gaussian random variable X. In particular, γ_r ≍ √r, i.e., there exist positive constants c_1, c_2 such that c_1 √r ≤ γ_r ≤ c_2 √r for all r ≥ 1.
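In the special case q_1 = 2, the expected operator norm appearing in such bounds reduces to E[σ_max(T)], for which the classical Gordon bound E[σ_max(T)] ≤ √m + √d holds for a standard Gaussian m × d matrix. A quick Monte Carlo sketch (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
m, d, trials = 20, 30, 200
# empirical mean of sigma_max over independent standard Gaussian matrices
sigmas = [np.linalg.norm(rng.standard_normal((m, d)), 2) for _ in range(trials)]
empirical_mean = float(np.mean(sigmas))
# Gordon's bound: E[sigma_max] <= sqrt(m) + sqrt(d)
assert empirical_mean <= np.sqrt(m) + np.sqrt(d)
```

In this illustration the empirical mean sits just below √20 + √30 ≈ 9.95, consistent with the bound being sharp up to lower-order terms.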

Integral Version and Application
We now consider a further generalization of (15), given by

sup_{∥y∥_2 = 1} ∫_0^t ( ∥T(τ) y∥_{q_2} − ∥T(τ) y∥_{q_1} ) dτ,   (23)

where for each τ ∈ [0, t], the matrix T(τ) ∈ R^{m×d}, m ≤ d, is smooth in τ and has full row rank m.
As before, let p_1, p_2 denote the Hölder conjugates of q_1, q_2, respectively. We can interpret (23) as computing the Hausdorff distance between two compact convex sets in R^d obtained by first taking linear transformations of the m-dimensional ℓ_{p_1} and ℓ_{p_2} unit norm balls via T^⊤(τ) ∈ R^{d×m} for fixed τ ∈ [0, t], then taking the respective Minkowski sums for varying τ, and finally passing to the limit. In particular, if we let P_i := {v ∈ R^m : ∥v∥_{p_i} ≤ 1} for i ∈ {1, 2}, then (23) computes the Hausdorff distance between the d dimensional compact convex sets

K_i := ∫_0^t T^⊤(τ) P_i dτ,  i ∈ {1, 2},   (24)

i.e., the sets under consideration are set-valued Aumann integrals [52], understood as limits of Minkowski sums ∔ of the sets T^⊤(τ) P_i over vanishing partitions of [0, t]. That the sets in (24) are convex is a consequence of the Lyapunov convexity theorem [53,54].
Notice that in this case, (17) directly yields

δ( K_1, K_2 ) ≤ ( m^{1/q_2 − 1/q_1} − 1 ) ∫_0^t ∥T(τ)∥_{2→q_1} dτ.   (25)

A different way to deduce (25) is to utilize the definitions (24), and then combine the Hausdorff distance property in [8, Lemma 2.2(ii)] with a limiting argument. This gives

δ( K_1, K_2 ) ≤ ∫_0^t δ( T^⊤(τ) B^m_{p_1}, T^⊤(τ) B^m_{p_2} ) dτ.   (26)

For a fixed τ ∈ [0, t], the integrand in the right hand side of (26) is precisely (15), hence using Proposition 3 we again arrive at (25).

As a motivating application, consider two controlled linear dynamical agents with identical dynamics given by the ordinary differential equation

ẋ_i(t) = A(t) x_i(t) + B(t) u_i(t),  i ∈ {1, 2},   (27)

where x_i(t) ∈ R^d is the state and u_i(t) ∈ R^m is the control input for the ith agent at time t. Suppose that the system matrices A(t), B(t) are smooth measurable functions of t, and that the initial conditions for the two agents have the same compact convex set-valued uncertainty, i.e., x_i(t = 0) ∈ X_0 for a compact convex X_0 ⊂ R^d. Furthermore, suppose that the input uncertainty sets for the two systems are given by the different unit norm balls

U_i := { u ∈ R^m : ∥u∥_{p_i} ≤ 1 },  i ∈ {1, 2},   (28)

such that 1 ≤ p_1 < p_2 ≤ ∞. Given these set-valued uncertainties, the "reach sets" X^i_t, i ∈ {1, 2}, are defined as the respective sets of states each agent may reach at a given time t > 0. Specifically, for i ∈ {1, 2} and U_i given by (28), the reach sets are

X^i_t := { x_i(t) ∈ R^d : x_i(·) satisfies (27), x_i(0) ∈ X_0, u_i(τ) ∈ U_i for all τ ∈ [0, t] }.   (29)

As such, there exists a vast literature [12][13][14][15][16][17][18][19][20] on reach sets and their numerical approximations. In practice, these sets are of interest because their separation or intersection often implies safety or the lack of it. It is natural to quantify the distance between reach sets or their approximations in terms of the Hausdorff distance [55][56][57], and in our context, this amounts to estimating δ(X^1_t, X^2_t). Since 1 ≤ p_1 < p_2 ≤ ∞, we have the norm ball inclusion U_1 ⊂ U_2, and consequently X^1_t ⊂ X^2_t. We next show that δ(X^1_t, X^2_t) is exactly of the form (23).
Theorem 7 (Hausdorff distance between linear systems' reach sets with norm ball input uncertainty) Consider the reach sets (29) with input set-valued uncertainty (28). For τ ≤ t, let Φ(t, τ) be the state transition matrix (see e.g., [58, Ch. 1.3]) associated with (27). Denote the Hölder conjugate of p_1 as q_1, and that of p_2 as q_2, i.e., 1/p_1 + 1/q_1 = 1 and 1/p_2 + 1/q_2 = 1. Then 1 ≤ q_2 < q_1 ≤ ∞, and the Hausdorff distance

δ( X^1_t, X^2_t ) = sup_{∥y∥_2 = 1} ∫_0^t ( ∥B(τ)^⊤ Φ(t, τ)^⊤ y∥_{q_2} − ∥B(τ)^⊤ Φ(t, τ)^⊤ y∥_{q_1} ) dτ.   (30)

Proof We have

X^i_t = Φ(t, 0) X_0 ∔ ∫_0^t Φ(t, τ) B(τ) U_i dτ,  i ∈ {1, 2},   (31)

where ∔ denotes the Minkowski sum and the second summand in (31) is a set-valued Aumann integral. Since the support function is distributive over the Minkowski sum, following [59, Prop. 1] and (3), from (31) we find that

h_{X^i_t}(y) = h_{X_0}( Φ(t, 0)^⊤ y ) + ∫_0^t h_{U_i}( B(τ)^⊤ Φ(t, τ)^⊤ y ) dτ,   (32)

wherein i ∈ {1, 2} and the sets U_i are given by (28). Next, we follow the same arguments as in [60, Thm. 1] to simplify (32) as

h_{X^i_t}(y) = h_{X_0}( Φ(t, 0)^⊤ y ) + ∫_0^t ∥ B(τ)^⊤ Φ(t, τ)^⊤ y ∥_{q_i} dτ,   (33)

where q_i is the Hölder conjugate of p_i. Then (5) together with (33) yield (30). □

Corollary 8 Using the same notations as in Theorem 7, we have

δ( X^1_t, X^2_t ) ≤ ( m^{1/q_2 − 1/q_1} − 1 ) ∫_0^t ∥ Φ(t, τ) B(τ) ∥_{p_1 → 2} dτ.   (34)

Proof From (25), we obtain the estimate

δ( X^1_t, X^2_t ) ≤ ( m^{1/q_2 − 1/q_1} − 1 ) ∫_0^t ∥ B(τ)^⊤ Φ(t, τ)^⊤ ∥_{2 → q_1} dτ.   (35)

Recall that the norm of a linear operator is related to the norm of its adjoint via ∥M∥_{α→β} = ∥M^⊤∥_{β^* → α^*}, where α^*, β^* are the Hölder conjugates of α, β, respectively. Using this fact in (35) completes the proof. □

Remark 4 In the special case of linear time invariant dynamics, the matrices A, B in (27) are constants and Φ(t, τ) = exp((t − τ)A). In that case, Theorem 7 and Corollary 8 apply with these additional simplifications.
Remark 5 As t increases, we expect the bound (25) to become more conservative. Likewise, the gap between (30) and (34) is expected to increase with t.
Example. Consider the linearized equation of motion of a satellite [58, p. 14-15] of the form (27) with four states, two control inputs, and constant system matrices

A = [ 0  1  0  0 ; 3ω²  0  0  2ω ; 0  0  0  1 ; 0  −2ω  0  0 ],  B = [ 0  0 ; 1  0 ; 0  0 ; 0  1 ],   (36)

for some fixed parameter ω. The input components denote the radial and tangential thrusts, respectively. We consider two cases: the inputs have set-valued uncertainty of the form (28) with p_1 = 2 (unit Euclidean norm-bounded thrust vector) and p_2 = ∞ (unit bounds on the magnitude of each thrust component). Since the dynamics are linear time invariant, Φ(t, τ)B = exp((t − τ)A)B for 0 ≤ τ < t, and since p_1 = 2, the integrand in the right hand side of (34) equals the maximum singular value of this matrix. For ω = 3 and t ∈ [0, 2], Fig. 5 shows the time evolution of the numerically estimated Hausdorff distance (30) and the upper bound (34) between the reach sets given by (29) with the same compact convex initial set X_0 ⊂ R^4, i.e., between X^1_t and X^2_t resulting from the unit p_1 = 2 and p_2 = ∞ norm ball input sets, respectively.

Fig. 5: The numerically estimated Hausdorff distance (30) and the upper bound (34) for the four state, two input linear system given in (36).
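The right hand side of (34) is straightforward to evaluate numerically for the LTI case via the matrix exponential. The sketch below is our own illustration (it assumes the standard linearized satellite matrices of the example above and uses `scipy.linalg.expm`; the quadrature grid is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

w, t = 3.0, 2.0
# linearized satellite dynamics (constant matrices of the example)
A = np.array([[0.0,     1.0, 0.0, 0.0],
              [3*w**2,  0.0, 0.0, 2*w],
              [0.0,     0.0, 0.0, 1.0],
              [0.0,    -2*w, 0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])

m, inv_q1, inv_q2 = 2, 0.5, 1.0   # p1 = 2, p2 = inf  =>  q1 = 2, q2 = 1
taus = np.linspace(0.0, t, 401)
# integrand of (34): sigma_max(exp((t - tau) A) B), since p1 = 2
integrand = np.array([np.linalg.norm(expm((t - tau) * A) @ B, 2) for tau in taus])
# trapezoid rule for the time integral
integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(taus)))
upper_bound = (m**(inv_q2 - inv_q1) - 1) * integral

# at tau = t: exp(0) B = B, whose columns are orthonormal, so sigma_max = 1
assert abs(integrand[-1] - 1.0) < 1e-9
assert upper_bound > 0
```

Refining the grid (or swapping in a higher-order quadrature) tightens the numerical approximation of the integral; the pre-factor here is √2 − 1 since m = 2, q_1 = 2, q_2 = 1.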

Conclusions
In this work, we studied the Hausdorff distance between two different norm balls in a Euclidean space and derived closed-form formulas for the same. In d dimensions, we provided results for the ℓ_p norm balls parameterized by p where 1 ≤ p ≤ ∞, as well as for the polyhedral D-norm balls parameterized by k where 1 ≤ k ≤ d. We then investigated a more general setting: the Hausdorff distance between two convex sets obtained by transforming two different ℓ_p norm balls via a given linear map. In this setting, while we do not know a general closed-form formula for an arbitrary linear map, we provide upper