A Hypersurface Containing the Support of a Radon Transform must be an Ellipsoid. I: The Symmetric Case

If the Radon transform of a compactly supported distribution f ≠ 0 in R^n is supported on the set of tangent planes to the boundary ∂D of a bounded convex domain D, then ∂D must be an ellipsoid. As a corollary of this result we get a new proof of a recent theorem of Koldobsky, Merkurjev, and Yaskin, which settled a special case of a conjecture of Arnold that was motivated by a famous lemma of Newton.

In other words, R f is a distribution on the manifold of lines in the plane that vanishes on the open set of lines that intersect the open unit disk. Since R f obviously vanishes on the open set of lines that are disjoint from the closed disk, it follows that the distribution R f is supported on the set of lines that are tangent to the circle. By means of an affine transformation it is easy to construct a similar example where the circle is replaced by an arbitrary ellipse. For an arbitrary ellipsoidal domain D ⊂ R^n, n > 2, it is also easy to construct examples of distributions f supported in D such that the Radon transform R f is supported on the set of tangent planes to the boundary of D. However, surprisingly, for convex domains other than ellipsoids such distributions do not exist.
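The disk example can be made concrete and checked numerically. A standard density with this property (our choice for illustration; the text above does not single one out) is h(x) = (1 − |x|^2)^{−1/2} on the open unit disk in R^2: its integral over the chord x·ω = p is ∫ (1 − p^2 − t^2)^{−1/2} dt = π for every |p| < 1, so Rh equals π on chords and 0 outside, and hence R(Δh) = ∂_p^2 Rh is supported on the two tangent lines p = ±1. A minimal numerical sketch of the constancy:

```python
import math

def radon_of_h(p, n=200_000):
    """Numerically integrate h = (1 - |x|^2)^(-1/2) over the chord x.omega = p.

    Substituting t = r*sin(theta) with r = sqrt(1 - p^2) removes the
    endpoint singularity; the midpoint rule on theta then converges fast.
    """
    r = math.sqrt(1.0 - p * p)
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = -math.pi / 2 + (i + 0.5) * dtheta
        t = r * math.sin(theta)
        integrand = 1.0 / math.sqrt(1.0 - p * p - t * t)   # h along the chord
        total += integrand * r * math.cos(theta) * dtheta  # Jacobian dt
    return total

values = [radon_of_h(p) for p in (0.0, 0.3, 0.6, 0.9)]
```

The substitution t = r sin θ is what makes the midpoint rule accurate despite the inverse-square-root singularity at the endpoints of the chord.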
Since we will consider arbitrary convex (not necessarily smooth) domains, we have to replace the notion of tangent plane by that of supporting plane. A supporting plane for D is a hyperplane L such that L ∩ D is non-empty and one of the components of R^n \ L is disjoint from D.

Theorem 1 Let D be an open, convex, bounded, and symmetric (that is D = −D)
subset of R^n, n ≥ 2, with boundary ∂D. If there exists a distribution f ≠ 0 with support in D such that the Radon transform of f is supported on the set of supporting planes for D, then ∂D must be an ellipsoid.
The set of tangent planes to an ellipsoid is a quadratic hypersurface in the space of hyperplanes, hence is itself an ellipsoid, as asserted in the title.
The more general case when D is not assumed to be symmetric turned out to require different arguments from those given here. This case will therefore be treated in a forthcoming article.
Remark Our arguments prove in fact a stronger statement, Theorem 3, which is local in ω but global in p; see Sect. 5.
Denote by V(ω, p) the volume of one of the connected components of D \ L(ω, p), where D ⊂ R^n is a convex, bounded domain and L(ω, p) is the hyperplane x · ω = p, which we assume intersects D. A famous conjecture of Arnold (Problem 1987-14 in Arnold's Problems, [3]) asserts that if V(ω, p) is an algebraic function, then n must be odd and D must be an ellipsoid. The background of Arnold's conjecture is a famous lemma in Newton's Principia and is described in [4]. The case of even n was settled long ago by Vassiliev [8], see also [7]. For odd n the question is still open. The special case when n is odd and p → V(ω, p) is assumed to be a polynomial function of degree ≤ N for all ω and some N has also been studied and was settled recently by Koldobsky, Merkurjev, and Yaskin for domains D with smooth boundary [6]; see also [1]. If the domain D is symmetric, then this question is answered by Theorem 1. In [2] the case when p → V(ω, p) is algebraic and satisfies a certain additional condition is reduced to the case when p → V(ω, p) is polynomial.

Corollary 1 Let D ⊂ R^n, n ≥ 2, be as in Theorem 1, and assume that there exists a number N such that p → V(ω, p) is a polynomial of degree ≤ N for all ω ∈ S^{n−1}. Then ∂D must be an ellipsoid.
Proof Let χ_D be the characteristic function of D and choose an integer k such that 2k > N. Since (Rχ_D)(ω, p) = −∂_p V(ω, p) is a polynomial in p of degree < N when L(ω, p) intersects D, the assumption implies that ∂_p^{2k}(Rχ_D)(ω, p) = 0 for those p, and obviously (Rχ_D)(ω, p) = 0 for all other p. This shows that the distribution ∂_p^{2k} Rχ_D must be supported on the set of supporting planes to ∂D, a hypersurface in the manifold of hyperplanes in R^n. Define the distribution f in R^n by f = Δ^k χ_D, where Δ denotes the Laplace operator. The formula R(Δ^k h)(ω, p) = ∂_p^{2k} Rh(ω, p) with h = χ_D now shows that the distribution R f must be supported on the set of supporting hyperplanes. By Theorem 1 this implies that ∂D is an ellipsoid.
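Concretely, the hypothesis of Corollary 1 holds for the unit ball in R^3 (a standard computation, used here only as an illustration): slicing gives V(p) = ∫_p^1 π(1 − t^2) dt = π(2/3 − p + p^3/3) = π(1 − p)^2 (2 + p)/3, a cubic polynomial in p. A quick numerical cross-check of this arithmetic:

```python
import math

def cap_volume_numeric(p, n=100_000):
    """Volume of {x in R^3 : |x| < 1, x3 > p}, by integrating slice areas."""
    total = 0.0
    dt = (1.0 - p) / n
    for i in range(n):
        t = p + (i + 0.5) * dt
        total += math.pi * (1.0 - t * t) * dt  # area of the disk slice at height t
    return total

def cap_volume_poly(p):
    # closed-form cubic polynomial pi*(1-p)^2*(2+p)/3
    return math.pi * (1.0 - p) ** 2 * (2.0 + p) / 3.0

checks = [(cap_volume_numeric(p), cap_volume_poly(p)) for p in (-0.5, 0.0, 0.4, 0.9)]
```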
A somewhat related problem is treated in a recent article by Ilmavirta and Paternain [5]. There it is proved that if there exists a function in L^1(D), D ⊂ R^n, whose X-ray transform (integral over lines) is constant, then D must be a ball.
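Returning to the proof of Corollary 1, the intertwining formula R(Δ^k h)(ω, p) = ∂_p^{2k} Rh(ω, p) can be sanity-checked numerically for k = 1 in the plane. A Gaussian h = e^{−|x|^2} is used here because both sides are explicit (Rh(p) = √π e^{−p^2}); it is not compactly supported, so this is only an illustration of the formula, not of the corollary itself:

```python
import math

def radon_of_laplacian_gaussian(p, T=8.0, n=200_000):
    """Integrate (Delta h)(x) = (4|x|^2 - 4) e^{-|x|^2} over the line x1 = p."""
    total = 0.0
    dt = 2.0 * T / n
    for i in range(n):
        t = -T + (i + 0.5) * dt
        r2 = p * p + t * t
        total += (4.0 * r2 - 4.0) * math.exp(-r2) * dt
    return total

def d2_radon_gaussian(p):
    # second p-derivative of Rh(p) = sqrt(pi) * exp(-p^2)
    return math.sqrt(math.pi) * (4.0 * p * p - 2.0) * math.exp(-p * p)

pairs = [(radon_of_laplacian_gaussian(p), d2_radon_gaussian(p)) for p in (0.0, 0.7, 1.5)]
```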
In Sect. 2 we will write down an expression for an arbitrary distribution g on the manifold of hyperplanes that is equal to the Radon transform of some compactly supported distribution and is supported on the submanifold of supporting planes to ∂ D. In Sect. 3 we will use the description of the range of the Radon transform to write down the conditions for g to be the Radon transform of a compactly supported distribution f . Those conditions will be an infinite number of polynomial identities in the supporting function ρ(ω) for D and the densities q j (ω) that define the distribution g. Thereby the problem is transformed to a purely algebraic question. In Sect. 4 we analyze the polynomial identities and prove (Theorem 2) that they imply that the supporting function ρ(ω) must be a quadratic polynomial, which together with the fact that ρ(ω) > 0 implies that ∂ D is an ellipsoid. In Sect. 5 we finish the proof of Theorem 1 and prove the semi-local version Theorem 3. An outline of the proof of Theorem 2 is given in Sect. 4.

Distributions on the Manifold of Hyperplanes
As is well known, the manifold P^n of hyperplanes in R^n can be identified with the manifold (S^{n−1} × R)/(±1), the set of pairs (ω, p) ∈ S^{n−1} × R, where (ω, p) is identified with (−ω, −p). Thus a function on P^n can be represented as an even function g(ω, p) = g(−ω, −p) on S^{n−1} × R. In this article a distribution on P^n will be a linear form on C^∞_e(S^{n−1} × R), the set of smooth even functions on S^{n−1} × R, and a locally integrable even function h(ω, p) on S^{n−1} × R will be identified with the distribution

ϕ → ∫_{S^{n−1}} ∫_R h(ω, p) ϕ(ω, p) dp dω,  ϕ ∈ C^∞_e(S^{n−1} × R),

where dω is area measure on S^{n−1}. Using the standard definition of the dual transform R^*, (R^*ϕ)(x) = ∫_{S^{n−1}} ϕ(ω, x · ω) dω, we can then define the Radon transform of the compactly supported distribution f on R^n as the linear form

ϕ → ⟨f, R^*ϕ⟩.

Let D be a bounded, convex subset of R^n with boundary ∂D. Here we will also assume that D is symmetric with respect to some point, which we may assume to be the origin, so D = −D. We shall denote the supporting function of D by ρ(ω), that is

ρ(ω) = sup_{x ∈ D} x · ω.

Since D is symmetric, ρ will be an even function, because ρ(−ω) = sup_{x ∈ D} x · (−ω) = sup_{x ∈ −D} x · ω = ρ(ω). Clearly a hyperplane x · ω = p intersects D if and only if |p| < ρ(ω), and it is a supporting plane to ∂D if and only if p = ±ρ(ω). We shall consider the hypersurface in P^n that consists of all the supporting planes to ∂D. Since the origin in R^n is contained in (the interior of) D, none of the supporting planes can contain the origin, hence ρ(ω) > 0 for all ω.
A distribution of order 0 that is supported on the set of supporting planes to D can therefore be represented as

g(ω, p) = q_+(ω) δ(p − ρ(ω)) + q_−(ω) δ(p + ρ(ω))

for some measures q_+(ω) and q_−(ω) on S^{n−1}; here δ(·) is the Dirac measure at the origin in R. Since ρ(−ω) = ρ(ω) and δ(·) is even we have

g(−ω, −p) = q_−(−ω) δ(p − ρ(ω)) + q_+(−ω) δ(p + ρ(ω)).

Since g must be even, g(ω, p) = g(−ω, −p), this shows that we must have q_−(−ω) = q_+(ω). Denoting q_+(ω) by q_0(ω) we can therefore write

g(ω, p) = q_0(ω) δ(p − ρ(ω)) + q_0(−ω) δ(p + ρ(ω))    (1)

for some measure q_0(ω). We next show that we may assume that the distribution f is even, f(x) = f(−x), which implies that g = R f is even in ω and p separately.

Lemma 1 Assume that there exists a compactly supported distribution f ≠ 0 such that R f is supported on p = ±ρ(ω). Then there exists an even distribution with the same property.
Proof Let f ≠ 0 be such that R f is supported on p = ±ρ(ω). We have to construct an even distribution with the same property. It is clear that the distribution f(−x) has the same property. Hence the even part (f(x) + f(−x))/2 and the odd part h(x) = (f(x) − f(−x))/2 of f both have the same property. If the even part is different from zero there is nothing more to prove, so we may assume that the odd part h is different from zero. Then h_1 = ∂_{x_1} h is even, and h_1 ≠ 0, since h ≠ 0 is compactly supported. It remains to prove that Rh_1 is supported on p = ±ρ(ω). But this follows from the formula R(∂_{x_1} h)(ω, p) = ω_1 ∂_p Rh(ω, p), which is easily seen by applying the projection slice formula (Rϕ)ˆ(ω, τ) = ϕ̂(τω), the Fourier transform on the left taken with respect to p, to both members.
From now on we will therefore assume that the distribution f is even, which implies that g(ω, p) = R f(ω, p) is even in ω and p separately. This implies that the measure q_0 in (1) must be even, so we may write

g(ω, p) = q_0(ω) (δ(p − ρ(ω)) + δ(p + ρ(ω)))    (2)

for some even q_0(ω).
If the boundary ∂D is smooth and hence ρ(ω) is smooth, we can argue similarly, using the fact that δ^{(j)}(·) is odd if j is odd and even if j is even, to see that an arbitrary distribution g(ω, p) that is even in ω and p separately and is supported on p = ±ρ(ω) can be written

g(ω, p) = Σ_{j=0}^{m−1} q_j(ω) (δ^{(j)}(p − ρ(ω)) + (−1)^j δ^{(j)}(p + ρ(ω)))    (3)

for some distributions q_0(ω), …, q_{m−1}(ω) on S^{n−1}. But if ρ(ω) is not smooth, this is not always true. Note that δ^{(j)}(p ± ρ(ω)) should be interpreted as the jth distributional derivative of δ(p ± ρ(ω)) with respect to p. However, if g = R f for some compactly supported distribution f, then we shall see that the representation (3) is valid and that the distributions q_j(ω) must be continuous functions.
Lemma 2 Let f be a compactly supported even distribution in R n and let g = R f . Assume that g is supported on the set of supporting planes to D. Then there exists a number m and continuous functions q j (ω) such that the distribution g can be written in the form (3).
Proof For fixed ω ∈ S^{n−1} define the distribution R_ω f on R by ⟨R_ω f, ψ⟩ = ⟨f, x → ψ(x · ω)⟩ for ψ ∈ C^∞(R). We note that the map ω → R_ω f must be continuous in the sense that ω → ⟨R_ω f, ψ⟩ is continuous for every test function ψ ∈ C^∞(R). R f can be expressed in terms of R_ω f by

⟨R f, ϕ⟩ = ∫_{S^{n−1}} ⟨f, x → ϕ(ω, x · ω)⟩ dω = ∫_{S^{n−1}} ⟨R_ω f, ϕ(ω, ·)⟩ dω.    (4)

To prove the first identity we replace the integral defining R^*ϕ by Riemann sums and observe that the function x → ϕ(ω, x · ω) together with all its derivatives depends continuously on ω. The formula (4) shows that if g = R f is supported on the hypersurface p = ±ρ(ω), then R_ω f must be supported on the union of the two points p = ±ρ(ω) for every ω.
Hence R_ω f can be represented as the right-hand side of (3) for every ω. It remains only to prove that all q_j(ω) are continuous. It is enough to prove that q_j(ω) is continuous in some neighborhood of an arbitrary ω_0 ∈ S^{n−1}. For a test function ψ ∈ C^∞(R) supported in a small neighborhood of ρ(ω_0) > 0 we have, for all ω near ω_0,

⟨R_ω f, ψ⟩ = Σ_{j=0}^{m−1} (−1)^j q_j(ω) ψ^{(j)}(ρ(ω)).

We have seen that the left-hand side, hence the expression on the right-hand side, must be a continuous function of ω for every ψ. Choosing ψ(p) such that ψ(p) = 1 in a neighborhood of ρ(ω_0) shows that q_0(ω) is continuous at ω_0. Next choosing ψ(p) such that ψ(p) = p in a neighborhood of ρ(ω_0) shows that q_1(ω) is continuous. Continuing in this way completes the proof.
Our next goal will be to write down the conditions on q j (ω) and ρ(ω) for g(ω, p) to belong to the range of the Radon transform.

The Range Conditions
It is well known that a compactly supported (ω, p)-even function or distribution g(ω, p) belongs to the range of the Radon transform if and only if the function

ω → ∫_R g(ω, p) p^k dp

is the restriction to the unit sphere of a homogeneous polynomial of degree k in ω for every non-negative integer k.
We next compute the moments ∫_R g(ω, p) p^k dp for the expression (3). By the definition of δ^{(j)}, for any a ∈ R,

∫_R δ^{(j)}(p − a) p^k dp = (−1)^j (d^j/dp^j) p^k |_{p=a} = (−1)^j k(k−1)⋯(k−j+1) a^{k−j}.

For arbitrary non-negative integers k, j we define the constant c_{k,j} by c_{0,0} = 1 and

c_{k,j} = k!/(k−j)! = k(k−1)⋯(k−j+1),  k ≥ 1.    (5)

Note that the second expression in (5) makes sense also for j > k and is equal to zero then, although the first expression does not make sense if j > k. For instance, if j = 2, then c_{k,j} = k(k−1) for all k. We can now summarize our computations as follows: if g(ω, p) is defined by (3), then

∫_R g(ω, p) p^k dp = 2 Σ_{j=0}^{m−1} (−1)^j c_{k,j} q_j(ω) ρ(ω)^{k−j}    if k is even,    (7)

and the moment vanishes if k is odd. Thus, for g(ω, p) to be the Radon transform of a compactly supported distribution it is necessary and sufficient that

Σ_{j=0}^{m−1} (−1)^j c_{k,j} q_j(ω) ρ(ω)^{k−j}

is equal to the restriction to S^{n−1} of a homogeneous polynomial of degree k for every even k.
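The two expressions for c_{k,j} in (5), the factorial quotient and the falling-factorial product, can be compared directly, together with the convention c_{k,j} = 0 for j > k. A small sketch (math.comb conveniently returns 0 when j > k, which encodes exactly that convention):

```python
import math

def c_product(k, j):
    """Falling factorial k(k-1)...(k-j+1); empty product = 1, zero if j > k."""
    value = 1
    for i in range(j):
        value *= (k - i)
    return value

def c_factorial(k, j):
    """j! * C(k, j) = k!/(k-j)!; math.comb returns 0 for j > k."""
    return math.factorial(j) * math.comb(k, j)

table = {(k, j): (c_product(k, j), c_factorial(k, j))
         for k in range(8) for j in range(8)}
```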
In the next section we will show that those conditions imply that ρ(ω)^2 must be a quadratic polynomial. The fact that ρ(ω)^2 is a quadratic polynomial, combined with the fact that ρ(ω) is everywhere positive on S^{n−1}, implies that ∂D, the boundary of the region D, is an ellipsoid. Indeed, if ρ(ω)^2 = c|ω|^2 for some constant c, then it is obvious that D must be rotation invariant, hence a ball. And since ρ(ω)^2 is (strictly) positive on S^{n−1}, we can find a linear transformation A such that ρ(Aω)^2 = c|ω|^2 for some c. This implies that D must be an affine image of a ball, hence an ellipsoid.
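The last step uses the standard fact that the supporting function of A(B), B the unit ball, is ρ(ω) = |A^t ω|, so that ρ(ω)^2 = ω · (AA^t)ω is indeed a quadratic form. A numerical sketch with a hypothetical matrix A, chosen only for illustration:

```python
import math

# hypothetical linear map A taking the unit disk to an ellipse
A = [[2.0, 0.5],
     [0.3, 1.0]]

def sup_by_sampling(omega, n=20_000):
    """rho(omega) = sup of x . omega over the ellipse A(unit disk), by brute force."""
    best = -float("inf")
    for i in range(n):
        t = 2.0 * math.pi * i / n
        y = (math.cos(t), math.sin(t))              # point on the unit circle
        x = (A[0][0] * y[0] + A[0][1] * y[1],
             A[1][0] * y[0] + A[1][1] * y[1])       # its image on the ellipse
        best = max(best, x[0] * omega[0] + x[1] * omega[1])
    return best

def sup_closed_form(omega):
    # |A^t omega|
    u = (A[0][0] * omega[0] + A[1][0] * omega[1],
         A[0][1] * omega[0] + A[1][1] * omega[1])
    return math.hypot(u[0], u[1])

angles = [0.0, 0.8, 2.1, 4.0]
results = [(sup_by_sampling((math.cos(a), math.sin(a))),
            sup_closed_form((math.cos(a), math.sin(a)))) for a in angles]
```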

Analysis of the Polynomial Identities
The purpose of this section is to prove the following purely algebraic result. We shall denote the set of restrictions to the unit sphere of homogeneous polynomials of degree k by P_k.

Theorem 2 Assume that the strictly positive and continuous function ρ(ω) on S^{n−1} and the continuous functions q_0, q_1, …, q_{m−1}, not all zero, satisfy the infinitely many identities

p_{2k}(ω) = Σ_{j=0}^{m−1} c_{2k,j} q_j(ω) ρ(ω)^{2k−j} ∈ P_{2k},  k = 0, 1, 2, …,    (8)

where c_{k,j} is defined by (5). Then ρ(ω)^2 is a (not identically vanishing) quadratic polynomial.
In (8) we have omitted the factor (−1) j that occurred in (7), because in the proof of Theorem 1 we can of course apply Theorem 2 to the functions (−1) j q j .
For instance, if m = 3 the first few equations (8) read

p_0 = q_0,
p_2 = q_0 ρ^2 + 2 q_1 ρ + 2 q_2,
p_4 = q_0 ρ^4 + 4 q_1 ρ^3 + 12 q_2 ρ^2.

The first step of the proof of Theorem 2 will be to eliminate the m functions q_j from sets of m + 1 of the equations (8). The result is a set of infinitely many polynomial identities in ρ(ω)^2 with the polynomials p_k as coefficients, as will be explained in Lemma 4. Considering a set of m of those identities as a linear system of equations in the m quantities ρ^2, ρ^4, …, ρ^{2m} we can solve for ρ^2 as a rational function of the coefficients p_{2k} and hence as a rational function of ω, provided the determinant of the corresponding coefficient matrix (17) is different from zero. Under the same assumption we can prove rather easily that ρ^2 must be a polynomial by considering sets of m such linear systems together. This entails considering the translation operator (p_{2k}, p_{2k+2}, …, p_{2k+2m−2}) → (p_{2k+2}, p_{2k+4}, …, p_{2k+2m}) on m-vectors of polynomials from the infinite sequence p_0, p_2, …. This operator is given by the matrix S introduced below (18). The matrix S has m identical eigenvalues ρ^2, hence its determinant is ρ^{2m}. The crucial fact that the matrix (17) is non-singular (Proposition 1) is an easy consequence of the fact that the Jordan canonical form of S consists of just one Jordan block (Lemma 6).

Lemma 3 Let t_0, …, t_{m−1} be distinct real numbers. Then the m × m matrix (h_j(t_i)), 0 ≤ i, j ≤ m − 1, is non-singular.

Proof Introduce the polynomials

h_0(t) = 1,  h_j(t) = t(t − 1)⋯(t − j + 1),  j ≥ 1.

Then c_{2k,j} = h_j(2k). Since for any j

t^j = h_j(t) + Σ_{ν=0}^{j−1} a_ν h_ν(t)

for some constants a_ν, it is clear that any matrix of the form (h_j(t_i)) with all t_i distinct can be transformed to a Vandermonde matrix by elementary row operations, hence its determinant must be different from zero.

Lemma 3 shows that an arbitrary set of m consecutive rows C_{2k} = (c_{2k,0}, …, c_{2k,m−1}), k = r, r + 1, …, r + m − 1, from the matrix (c_{2k,j}) forms a linearly independent set of m-vectors. Therefore it is clear that an arbitrary row C_{2(r+m)} with r ≥ 0 can be expressed as a linear combination of the m preceding rows. This is made precise in the next lemma.
This completes the proof of Lemma 4.
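The row-reduction argument behind Lemma 3 can be checked exactly for a small matrix: since each t^j differs from h_j(t) by a combination of lower-order h_ν, det(h_j(t_i)) equals the Vandermonde determinant det(t_i^j) = Π_{i<j}(t_j − t_i), which is nonzero for distinct nodes. A sketch in exact rational arithmetic (the nodes t_i = 2k mirror the rows C_{2k}, but any distinct values work):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Leibniz-formula determinant, exact for Fraction entries (small m only)."""
    m = len(M)
    total = Fraction(0)
    for perm in permutations(range(m)):
        sign = 1
        for a in range(m):               # count inversions to get the sign
            for b in range(a + 1, m):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(m):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def h(j, t):
    """Falling factorial t(t-1)...(t-j+1)."""
    value = Fraction(1)
    for i in range(j):
        value *= (t - i)
    return value

ts = [Fraction(0), Fraction(2), Fraction(4), Fraction(6)]  # distinct nodes t_i = 2k
m = len(ts)
M_falling = [[h(j, ts[i]) for j in range(m)] for i in range(m)]
M_vander = [[ts[i] ** j for j in range(m)] for i in range(m)]

d1, d2 = det(M_falling), det(M_vander)
vandermonde_product = Fraction(1)
for i in range(m):
    for j in range(i + 1, m):
        vandermonde_product *= (ts[j] - ts[i])
```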
For instance, if m = 3 the first few of the identities (13) can be written out explicitly. If we introduce the translation operator T, defined by T p_{2k} = p_{2k+2}, on the infinite sequence of polynomials (p_0, p_2, p_4, …), then (12) can be written compactly in terms of T. A natural start towards a proof of Theorem 2 would be to try to solve for ρ^2 from some set of m of the equations (12), considering the equations as linear expressions in the m unknowns ρ^2, ρ^4, …, ρ^{2m}. If the matrix (17) is non-singular, we can solve for ρ^2 from the linear system (12) with r = 0, 1, …, m − 1 and obtain ρ^2 as a rational function

ρ^2 = F/G,

where F and G are polynomials and G = det A_0. As we shall see below (Lemma 5), it is easy to strengthen this argument by considering m such systems together and thereby prove that ρ^2 is a polynomial. Therefore our main task in the rest of the proof of Theorem 2 will be to prove that the matrix A_0 is non-singular.

Proposition 1 Let the polynomials p 2k be defined as in Theorem 2 and assume that the function q m−1 (ω) is not identically zero. Then the matrix (17) is non-singular.
The proof will be given at the end of this section. Using Proposition 1 we can now easily finish the proof of Theorem 2. Denote by C(ω) the field of rational functions in ω = (ω_1, …, ω_n), and denote by C(ω)^m the m-dimensional vector space of m-tuples of elements from C(ω). Introduce the column m-vectors in C(ω)^m

P_0 = (p_0, p_2, …, p_{2m−2})^t,  P_2 = (p_2, p_4, …, p_{2m})^t,

and generally P_{2k} = (p_{2k}, p_{2k+2}, …, p_{2k+2m−2})^t. The recurrence relations (13) then show that the translation operator P_{2k} → P_{2k+2} is given by a matrix S, so that S P_{2k} = P_{2k+2} and S^k P_0 = P_{2k} for all k.
Proof We have already seen that ρ(ω)^2 must be a rational function if the matrix (17) is non-singular. The equations P_{2k+2j} = S^k P_{2j}, j = 0, 1, …, m − 1, can be combined to the matrix equation A_k = S^k A_0, where A_k is the matrix with columns P_{2k}, P_{2k+2}, …, P_{2k+2m−2}. Denote the determinant of A_0, which is a polynomial, by d(ω). Since det S = ρ^{2m}, taking determinants gives det A_k = ρ^{2km} d(ω), so ρ^{2km} d(ω) is a polynomial for every k. Since ρ^2 is a rational function, this proves that ρ^2 must in fact be a polynomial as claimed.
We now turn to the proof of Proposition 1. To motivate the next lemma we make the following observations. Let B = (b k, j ) be the matrix of the system (8), with m columns and infinitely many rows, and let B 0 be the uppermost m × m minor of the matrix B obtained by restricting k to 0 ≤ k ≤ m − 1. Introducing the column m-vector Q = (q 0 , q 1 , q 2 , . . . , q m−1 ) t we then have B 0 Q = P 0 . We want to prove that the vectors P 0 , S P 0 , . . . , S m−1 P 0 span C(ω) m .
Since B_0 Q = P_0, those vectors can be written B_0 Q, S B_0 Q, …, S^{m−1} B_0 Q. And since B_0 is non-singular, (21) is equivalent to the condition that the vectors

Q, (B_0^{−1} S B_0) Q, …, (B_0^{−1} S B_0)^{m−1} Q

span C(ω)^m. Therefore we now study the matrix B_0^{−1} S B_0.

Lemma 6
The matrix B_0^{−1} S B_0 is an upper triangular matrix of the form

B_0^{−1} S B_0 = ρ^2 I + N,

where N = (n_{k,j}) is a nilpotent, upper triangular matrix whose elements next to the diagonal are given by

n_{k,k+1} = 2(k + 1)ρ,  k = 0, 1, …, m − 2.    (22)

For instance, if m = 5,

N = ( 0  2ρ  2   0   0 ;
      0  0   4ρ  6   0 ;
      0  0   0   6ρ  12 ;
      0  0   0   0   8ρ ;
      0  0   0   0   0 ).

The exact expression for the matrix N is inessential for us, apart from the fact that all the entries (22) are different from zero.

Proof of Lemma 6
Denote by u_0, …, u_{m−1} the column vectors of B_0. The assertion of the lemma is that the matrix of S with respect to the basis u_0, …, u_{m−1} is ρ^2 I + N. In fact we shall prove that

S u_0 = ρ^2 u_0,    (23)
S u_1 = ρ^2 u_1 + 2ρ u_0,    (24)
S u_j = ρ^2 u_j + 2jρ u_{j−1} + j(j−1) u_{j−2},  j ≥ 2.    (25)

The components of u_j are u_k^j = b_{k,j} = c_{2k,j} ρ^{2k−j}, 0 ≤ k ≤ m − 1. Denote by B_1 the second uppermost m × m minor of the matrix B, which is obtained by restricting k in (20) to 1 ≤ k ≤ m. The argument in the proof of Corollary 2 shows that S B_0 = B_1; in other words, the kth component of S u_j is u_{k+1}^j, if we define u_m^j as b_{m,j} = c_{2m,j} ρ^{2m−j}. Denote by D the formal derivative with respect to ρ. Note that u_k^0 = ρ^{2k}, u_k^1 = Dρ^{2k} for 0 ≤ k ≤ m, and that more generally u_k^j = D^j ρ^{2k} for 0 ≤ j ≤ m − 1, 0 ≤ k ≤ m. The identity (23) is obvious. To prove (24) we just note that the kth component of S u_1 satisfies

(S u_1)_k = Dρ^{2k+2} = D(ρ^2 ρ^{2k}) = ρ^2 Dρ^{2k} + 2ρ ρ^{2k} = (ρ^2 u_1 + 2ρ u_0)_k.

For (25) we use Leibniz' formula to get

(S u_j)_k = D^j(ρ^2 ρ^{2k}) = ρ^2 D^j ρ^{2k} + 2jρ D^{j−1} ρ^{2k} + j(j−1) D^{j−2} ρ^{2k},

which completes the proof.
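The computation in this proof can be verified exactly for small m: build B_0 and B_1 from b_{k,j} = c_{2k,j} ρ^{2k−j}, form M = ρ^2 I + N with superdiagonal entries 2(k+1)ρ and second superdiagonal entries (k+1)(k+2) read off from (24) and (25), and check that B_0 M = B_1 = S B_0. A sketch in exact rational arithmetic (the numerical value of ρ is an arbitrary positive test choice):

```python
from fractions import Fraction

m = 4
rho = Fraction(3, 2)   # arbitrary positive test value for rho(omega)

def c(k, j):
    """Falling factorial c_{k,j} = k(k-1)...(k-j+1)."""
    value = Fraction(1)
    for i in range(j):
        value *= (k - i)
    return value

def b(k, j):
    return c(2 * k, j) * rho ** (2 * k - j)

B0 = [[b(k, j) for j in range(m)] for k in range(m)]          # rows k = 0..m-1
B1 = [[b(k, j) for j in range(m)] for k in range(1, m + 1)]   # rows k = 1..m

# M = rho^2 I + N, with N carrying 2(k+1)rho and (k+1)(k+2)
M = [[Fraction(0)] * m for _ in range(m)]
for k in range(m):
    M[k][k] = rho ** 2
    if k + 1 < m:
        M[k][k + 1] = 2 * (k + 1) * rho
    if k + 2 < m:
        M[k][k + 2] = Fraction((k + 1) * (k + 2))

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(m)) for j in range(m)]
            for i in range(m)]

product = matmul(B0, M)   # should reproduce B1, i.e. B0^{-1} S B0 = rho^2 I + N
```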
shows that the m vectors A^k z, k = 0, 1, …, m − 1, are linearly dependent. The sufficiency follows from the fact that the Jordan normal form of A (over the algebraic closure of K) must consist of just one Jordan block, but can also be seen more directly as follows. Introduce the sequence of subspaces L_k = K z + K Az + ⋯ + K A^{k−1} z, k = 1, …, m. Here we have used the standard notation K w to denote the 1-dimensional subspace of K^m that is generated by the vector w ∈ K^m. Arguing inductively we easily see that dim(L_k) = dim(L_{k−1}) + 1 for each k ≤ m and hence dim(L_m) = m, which implies the assertion.
Proof of Proposition 1 By Lemma 7 it is enough to prove that (B_0^{−1} S B_0 − ρ^2 I)^{m−1} Q ≠ 0. Using Lemma 6 we can write

(B_0^{−1} S B_0 − ρ^2 I)^{m−1} = N^{m−1}.

The matrix N^{m−1} has only one non-vanishing entry, in the upper right corner. The value of this entry is equal to the product of all the entries on the diagonal next to the main diagonal described in Lemma 6. And this product is equal to c = (2ρ)^{m−1}(m−1)!, hence different from zero. Recalling that Q = (q_0, q_1, …, q_{m−1})^t we can now conclude that

N^{m−1} Q = (c q_{m−1}, 0, …, 0)^t.

By assumption the continuous function q_{m−1}(ω) is different from zero on some open subset of S^{n−1}. Thus we have proved that the determinant of the matrix (17) must be different from zero on some open set. But the determinant is a polynomial function, hence equal to a non-zero polynomial. This completes the proof of Proposition 1.
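That N^{m−1} has a single non-vanishing entry equal to the product (2ρ)^{m−1}(m−1)! of the superdiagonal entries holds for any strictly upper triangular N, whatever its entries further above the diagonal: the corner entry of N^{m−1} is a sum over strictly increasing paths of length m − 1, and only the superdiagonal path contributes. A sketch in exact arithmetic with arbitrary filler entries:

```python
from fractions import Fraction
import math
import random

m = 5
rho = Fraction(7, 3)  # arbitrary positive test value

# strictly upper triangular N with superdiagonal 2(k+1)rho; entries further
# above the diagonal are arbitrary, since they cannot reach the corner of N^(m-1)
random.seed(1)
N = [[Fraction(0)] * m for _ in range(m)]
for k in range(m - 1):
    N[k][k + 1] = 2 * (k + 1) * rho
for i in range(m):
    for j in range(i + 2, m):
        N[i][j] = Fraction(random.randint(-5, 5))

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(m)) for j in range(m)]
            for i in range(m)]

P = N
for _ in range(m - 2):
    P = matmul(P, N)   # P = N^(m-1)

expected_corner = (2 * rho) ** (m - 1) * math.factorial(m - 1)
```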

End of proof of Theorem 1
Let m be the largest number for which the coefficient q_{m−1} in (3) is not identically zero. Then Proposition 1 shows that the matrix (17) must be non-singular. And then Lemma 5 shows that ρ(ω)^2 must be a positive quadratic polynomial. This completes the proof of Theorem 2 and hence of Theorem 1.

A Semi-local Result
The arguments given here prove in fact a semi-local version of Theorem 1, where only an arbitrary open set of ω comes into play, but all p ∈ R. A set W of hyperplanes L ∈ P^n is called translation invariant if L ∈ W implies that every translate x + L is contained in W.

Theorem 3 Let D be as in Theorem 1, let x_0 ∈ ∂D, and let ω_0 ∈ S^{n−1} be such that L(ω_0, x_0 · ω_0) is a supporting plane to ∂D at x_0. Assume that there exists a distribution f ≠ 0 with support in the closure of D such that the restriction of R f to some translation invariant neighborhood W of L(ω_0, x_0 · ω_0) is supported on the set of supporting planes for D. Then ∂D coincides with an ellipsoid in some neighborhood of x_0.

This theorem implies Theorem 1, because if the assumptions of Theorem 1 are fulfilled, then the assumptions of Theorem 3 are valid for every x_0 ∈ ∂D, hence ∂D must be equal to an ellipsoid in some neighborhood of every point, hence be globally equal to an ellipsoid.

Proof of Theorem 3
Let f be as in the theorem and set R f = g. By the range conditions the functions

ω → ∫_R g(ω, p) p^k dp

on S^{n−1} must belong to P_k for all k. Set p_0 = x_0 · ω_0. The assumptions imply that the restriction of g to some neighborhood of (±ω_0, ±p_0) has the form (3). This shows that the assumption (8) of Theorem 2 must be valid for ω in some neighborhood E of ±ω_0. Note that the functions q_j are defined only in some neighborhood of ±ω_0, whereas ρ(ω), the supporting function of D, is initially defined on all of S^{n−1}. But in this proof we are only concerned with the restriction of ρ to E. Taking restrictions to E wherever relevant we see that the proofs of Corollary 2, Proposition 1, Lemma 5, and Lemma 6 work without change, so the restriction of ρ^2 to E must be a positive quadratic polynomial, and this implies the assertion.