
Recovering finite parametric distributions and functions using the spherical mean transform

Published in Analysis and Mathematical Physics.

Abstract

The aim of this article is to recover a certain type of finite parametric distributions and functions from their spherical mean transform, which is given on a certain family of spheres whose centers belong to a finite set \(\Gamma \). For this, we show how the reconstruction problem can be converted to a Prony-type system of equations whose regularity is guaranteed by the assumption that the points of \(\Gamma \) are in general position. By solving the corresponding Prony-type system we can extract the set of parameters which define the corresponding function or distribution.
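The reduction to a Prony-type system is the computational core of the method. As an illustration only — the classical one-dimensional Prony problem, not the specific system derived in the article (the function name and sample values below are ours) — the following sketch recovers nodes and weights from power-sum measurements:

```python
import numpy as np

def solve_prony(moments, s):
    """Recover nodes x_j and weights c_j from the measurements
    m_k = sum_j c_j * x_j**k, k = 0, ..., 2s - 1 (classical Prony method)."""
    m = np.asarray(moments, dtype=float)
    # Hankel system H a = -m[s:2s] for the coefficients of the Prony
    # polynomial p(x) = x^s + a_{s-1} x^{s-1} + ... + a_0
    H = np.array([[m[i + j] for j in range(s)] for i in range(s)])
    a = np.linalg.solve(H, -m[s:2 * s])
    # the nodes are the roots of p (np.roots takes highest coefficient first)
    nodes = np.roots(np.concatenate(([1.0], a[::-1])))
    nodes = nodes[np.argsort(nodes.real)]
    # the weights solve the rectangular Vandermonde system V c = m
    V = np.vander(nodes, N=2 * s, increasing=True).T
    weights, *_ = np.linalg.lstsq(V, m.astype(complex), rcond=None)
    return nodes.real, weights.real

# three real nodes and weights, six measurements
true_x, true_c = [-0.5, 0.3, 0.7], [0.5, 2.0, 1.0]
m = [sum(c * x**k for c, x in zip(true_c, true_x)) for k in range(6)]
nodes, weights = solve_prony(m, 3)
```

In practice the Hankel system becomes ill-conditioned when nodes cluster, which is why regularity assumptions such as general position matter.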



Acknowledgements

The author would like to thank Professor Yosef Yomdin from the Weizmann Institute of Science for his useful comments and suggestions during the writing of this article.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Yehonatan Salman.

Ethics declarations

Conflict of interest

The author declares that they have no competing interests.

Appendix

Lemma 5.1

Let g be a function, defined on \({\mathbb {R}}^{+}\), such that its radial extension belongs to the Schwartz space \(\mathrm {S}\left( {\mathbb {R}}^{n}\right) \). Then g and its Hankel transform G are continuously differentiable and belong to \(L^{1}\left( {\mathbb {R}}^{+}, r^{\frac{n - 1}{2}}dr\right) \).

Proof

Denote by \(g_{0}\) and \(G_{0}\) respectively the radial extensions of g and G to \({\mathbb {R}}^{n}\). Using a spherical coordinates system in \({\mathbb {R}}^{n}\), we have

$$\begin{aligned} \int _{|x| > 1}g_{0}(x)|x|^{-\frac{n - 1}{2}}dx = \int _{1}^{\infty }\int _{{\mathbb {S}}^{n - 1}}g_{0}(r\theta )r^{\frac{n - 1}{2}}d\theta dr = \frac{2\pi ^{\frac{n}{2}}}{\Gamma \left( \frac{n}{2}\right) }\int _{1}^{\infty }g(r)r^{\frac{n - 1}{2}}dr. \end{aligned}$$

Since \(g_{0}\) belongs to \(\mathrm {S}\left( {\mathbb {R}}^{n}\right) \), the integral on the left-hand side converges, and hence so does the integral on the right-hand side. Since g is also bounded on the interval [0, 1] (because \(g_{0}\) is bounded on the unit disk), it follows that g is in \(L^{1}\left( {\mathbb {R}}^{+}, r^{\frac{n - 1}{2}}dr\right) \). Denote by \(\widehat{g}_{0}\) the Fourier transform of \(g_{0}\); then, using spherical coordinates again, we have

$$\begin{aligned} \widehat{g}_{0}(\omega ) = \frac{1}{(2\pi )^{\frac{n}{2}}}\int _{{\mathbb {R}}^{n}}g_{0}(x)e^{-i\langle \omega , x\rangle }dx = \frac{1}{(2\pi )^{\frac{n}{2}}}\int _{0}^{\infty }\int _{{\mathbb {S}}^{n - 1}}g_{0}(r\theta )e^{-ir\langle \omega , \theta \rangle }d\theta r^{n - 1}dr. \end{aligned}$$

Denoting \(\omega = \lambda \psi \), where \(\lambda = |\omega |, \psi = \frac{\omega }{|\omega |}\) and using the identity

$$\begin{aligned} \int _{{\mathbb {S}}^{n - 1}}e^{-it\left\langle \psi , \theta \right\rangle } d\theta = (2\pi )^{\frac{n}{2}}j_{\frac{n}{2} - 1}(t) \end{aligned}$$

it follows that

$$\begin{aligned} \widehat{g}_{0}(\lambda \psi ) = \int _{0}^{\infty }g(r)j_{\frac{n}{2} - 1}(\lambda r)r^{n - 1}dr = G(\lambda ). \end{aligned}$$

Hence \(\widehat{g}_{0}\) is the radial extension of G. Since the Fourier transform maps the Schwartz space \(\mathrm {S}\left( {\mathbb {R}}^{n}\right) \) onto itself and since \(g_{0}\) is in \(\mathrm {S}\left( {\mathbb {R}}^{n}\right) \), it follows that the same is true for the radial extension of G. Hence, from the same arguments we used for g and its radial extension, it follows that G belongs to \(L^{1}\left( {\mathbb {R}}^{+}, r^{\frac{n - 1}{2}}dr\right) \).

The assertion that g is continuously differentiable follows immediately from the fact that its radial extension is infinitely differentiable, and the same is true for G. \(\square \)
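The relation \(\widehat{g}_{0}(\lambda \psi ) = G(\lambda )\) can be checked numerically in the simplest case \(n = 2\), where \(j_{n/2 - 1}(t) = J_{0}(t)\) and the Gaussian \(g(r) = e^{-r^{2}/2}\) has radial Fourier transform \(e^{-\lambda ^{2}/2}\). A minimal sketch (the quadrature parameters are our own choices; \(J_{0}\) is evaluated through its integral representation to keep the example dependency-free):

```python
import numpy as np

trapz = getattr(np, "trapezoid", None) or np.trapz  # numpy >= 2.0 vs older

def bessel_j0(x):
    # integral representation J_0(x) = (1/pi) * int_0^pi cos(x sin t) dt
    t = np.linspace(0.0, np.pi, 2001)
    return trapz(np.cos(np.outer(x, np.sin(t))), t, axis=1) / np.pi

def hankel_gaussian(lam, r_max=12.0, n_pts=4001):
    # G(lam) = int_0^inf g(r) J_0(lam * r) r dr for g(r) = exp(-r^2 / 2)
    r = np.linspace(0.0, r_max, n_pts)
    return trapz(np.exp(-r**2 / 2) * bessel_j0(lam * r) * r, r)

lam = 1.3
approx, exact = hankel_gaussian(lam), np.exp(-lam**2 / 2)
```

The truncation at `r_max` is harmless here because the Gaussian factor decays far faster than the Bessel factor oscillates.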

Lemma 5.2

Let \(n\ge 2\) and \(\lambda _{1},\ldots ,\lambda _{n}\) be n real numbers satisfying \(\lambda _{i}\ne \lambda _{j}, i\ne j\). Let \(\sigma :\{1,2,\ldots ,n\}\rightarrow \{1,2,\ldots ,n\}\) be a permutation which is different from the identity. Define the following matrix

$$\begin{aligned} M = \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \lambda _{1} - \lambda _{\sigma (1)} &{} \lambda _{2} - \lambda _{\sigma (2)} &{} \ldots &{} \lambda _{n} - \lambda _{\sigma (n)}\\ \lambda _{1}^{2} - \lambda _{\sigma (1)}^{2} &{} \lambda _{2}^{2} - \lambda _{\sigma (2)}^{2} &{} \ldots &{} \lambda _{n}^{2} - \lambda _{\sigma (n)}^{2}\\ \ldots \ldots \ldots \ldots \ldots \\ \lambda _{1}^{n} - \lambda _{\sigma (1)}^{n} &{} \lambda _{2}^{n} - \lambda _{\sigma (2)}^{n} &{} \ldots &{} \lambda _{n}^{n} - \lambda _{\sigma (n)}^{n} \end{array} \right) . \end{aligned}$$
(5.1)

If \(v = (v_{1},\ldots ,v_{n})\in {\mathbb {R}}^{n}\) satisfies \(Mv^{T} = 0\), then there are two indices i and j, \(i\ne j\), such that \(v_{i} = v_{j}\).

Proof

The proof is by induction on \(n\ge 2\). Since \(\sigma \) is different from the identity, for \(n = 2\) we have \(\sigma (1) = 2\) and \(\sigma (2) = 1\). In this case the vector \(v = (1,1)\) is in the kernel of the matrix M as defined by (5.1), and indeed this vector has two equal components with different indices. There are no other independent vectors in the kernel of M, since otherwise \(M = 0\), which would imply that \(\lambda _{1} = \lambda _{2}\).

Now we assume the induction hypothesis for every integer m satisfying \(2\le m \le n - 1\) and our aim is to prove it for the integer n where we assume that \(n\ge 3\).

First observe that we can assume that \(\sigma \) does not have any fixed points. Indeed, suppose without loss of generality that \(\sigma (n) = n\), then the matrix M has the following form

$$\begin{aligned} M = \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \lambda _{1} - \lambda _{\sigma (1)} &{} \lambda _{2} - \lambda _{\sigma (2)} &{} \ldots &{} \lambda _{n - 1} - \lambda _{\sigma (n - 1)} &{} 0\\ \lambda _{1}^{2} - \lambda _{\sigma (1)}^{2} &{} \lambda _{2}^{2} - \lambda _{\sigma (2)}^{2} &{} \ldots &{} \lambda _{n - 1}^{2} - \lambda _{\sigma (n - 1)}^{2} &{} 0\\ \ldots \ldots \ldots \ldots \ldots \\ \lambda _{1}^{n - 1} - \lambda _{\sigma (1)}^{n - 1} &{} \lambda _{2}^{n - 1} - \lambda _{\sigma (2)}^{n - 1} &{} \ldots &{} \lambda _{n - 1}^{n - 1} - \lambda _{\sigma (n - 1)}^{n - 1} &{} 0\\ \lambda _{1}^{n} - \lambda _{\sigma (1)}^{n} &{} \lambda _{2}^{n} - \lambda _{\sigma (2)}^{n} &{} \ldots &{} \lambda _{n - 1}^{n} - \lambda _{\sigma (n - 1)}^{n} &{} 0 \end{array} \right) = \left( \begin{array}{c@{\quad }c} A &{} \overline{0}\\ * &{} 0 \end{array} \right) , \end{aligned}$$

where A is the \((n - 1)\times (n - 1)\) top left submatrix of M. Hence, if \(Mv^{T} = 0\) then v has the form \(v = (v_{0}, *)\) where \(v_{0}\in {\mathbb {R}}^{n - 1}\) is in the kernel of A. Now we can use our induction hypothesis on the matrix A. Indeed, if \(\widetilde{\sigma }\) denotes the restriction of \(\sigma \) to the set \(\{1,2,\ldots ,n - 1\}\) then, since \(\sigma (n) = n\), it follows that \(\widetilde{\sigma }(\{1,\ldots ,n - 1\}) = \{1,\ldots ,n - 1\}\), and since \(\sigma \) is different from the identity the same is true for \(\widetilde{\sigma }\). Hence, using the induction hypothesis on the matrix A, it follows that \(v_{0}\) has at least two equal components with different indices and thus the same is true for v.

Now, with the assumption that \(\sigma \) does not have any fixed points it follows that the set \(\{1,2,\ldots ,n\}\) has a partition into mutually disjoint sets \(A_{1},\ldots ,A_{k}\) which satisfy the following conditions:

(i) \(\left| A_{i}\right| \ge 2,\ 1\le i\le k\).

(ii) \(\sigma \left( A_{i}\right) = A_{i},\ 1\le i\le k\).

(iii) For every \(1\le i\le k\) and every non-empty subset \(B\subsetneqq A_{i}\) we have \(\sigma (B)\ne B\).

We will distinguish between the cases when \(k = 1\) and \(k \ge 2\).

1.1 The case \(k = 1\)

If \(k = 1\) we will show that the kernel of M is spanned by the vector \(v = (1,1,\ldots ,1)\). Indeed, suppose that \(u = (u_{1},\ldots ,u_{n})\) is in the kernel of M. We can assume without loss of generality that \(u_{n} = 0\) (since otherwise we can replace u with \(u - u_{n}v\)). We will show that \(u = \overline{0}\). Since u is in the kernel of M we have

$$\begin{aligned} Mu= & {} \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \lambda _{1} - \lambda _{\sigma (1)} &{} \ldots &{} \lambda _{n - 2} - \lambda _{\sigma (n - 2)} &{} \lambda _{n - 1} - \lambda _{\sigma (n - 1)} &{} \lambda _{n} - \lambda _{\sigma (n)}\\ \lambda _{1}^{2} - \lambda _{\sigma (1)}^{2} &{} \ldots &{} \lambda _{n - 2}^{2} - \lambda _{\sigma (n - 2)}^{2} &{} \lambda _{n - 1}^{2} - \lambda _{\sigma (n - 1)}^{2} &{} \lambda _{n}^{2} - \lambda _{\sigma (n)}^{2}\\ \ldots \ldots \ldots \ldots \ldots \\ \lambda _{1}^{n} - \lambda _{\sigma (1)}^{n} &{} \ldots &{} \lambda _{n - 2}^{n} - \lambda _{\sigma (n - 2)}^{n} &{} \lambda _{n - 1}^{n} - \lambda _{\sigma (n - 1)}^{n} &{} \lambda _{n}^{n} - \lambda _{\sigma (n)}^{n} \end{array} \right) \left( \begin{array}{c} u_{1}\\ \ldots \\ u_{n - 2}\\ u_{n - 1}\\ 0 \end{array} \right) \nonumber \\= & {} \left( \rho _{1} - \rho _{\sigma (1)}\right) u_{1} + \cdots + \left( \rho _{n - 2} - \rho _{\sigma (n - 2)}\right) u_{n - 2} + \left( \rho _{n - 1} - \rho _{\sigma (n - 1)}\right) u_{n - 1} = \overline{0}\nonumber \\ \end{aligned}$$
(5.2)

where

$$\begin{aligned} \rho _{i} = \left( \lambda _{i},\lambda _{i}^{2},\ldots ,\lambda _{i}^{n}\right) ^{T},\quad 1 \le i \le n. \end{aligned}$$
(5.3)

For the case \(k = 1\), condition (iii) implies that \(\sigma \left( \{1,2,\ldots ,n - 1\}\right) \ne \{1,2,\ldots ,n - 1\}\). Hence, we can assume without loss of generality that \(\sigma (i)\ne n - 1,\ 1 \le i \le n - 1\). Hence the vectors \(\rho _{n - 1}\) and \(\rho _{n}\) appear only once in the brackets of Eq. (5.2), and thus from Eq. (5.2) we have

$$\begin{aligned} a_{1}\rho _{1} + \cdots + a_{n - 2}\rho _{n - 2} + u_{n - 1}\rho _{n - 1} - u_{\sigma ^{- 1}(n)}\rho _{n} = \overline{0} \end{aligned}$$
(5.4)

where \(a_{1},\ldots ,a_{n - 2}\) are constants which depend on \(u_{1},\ldots ,u_{n - 1}\). Now observe that if none of the vectors \(\rho _{1},\ldots ,\rho _{n}\) is equal to zero then they are linearly independent, since they form the n columns of an \(n\times n\) Vandermonde-type matrix and by our assumption \(\lambda _{i}\ne \lambda _{j}\) if \(i\ne j\). If one of these vectors is equal to zero then the others are different from zero, and they are still linearly independent since they form a submatrix of the Vandermonde-type matrix. Hence, since at least one of the vectors \(\rho _{n - 1}\) or \(\rho _{n}\) is different from zero, it follows from Eq. (5.4) that either \(u_{n - 1} = 0\) or \(u_{\sigma ^{-1}(n)} = 0\).
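The linear-independence claim for the vectors \(\rho _{i}\) can be verified directly: stacking them as columns gives a Vandermonde matrix scaled columnwise by the \(\lambda _{i}\), so the determinant is \(\prod _{i}\lambda _{i}\prod _{i<j}(\lambda _{j} - \lambda _{i})\). A small numerical sketch (the sample values are ours):

```python
import numpy as np

# columns rho_i = (lam_i, lam_i^2, ..., lam_i^n)^T for distinct nonzero lam_i
lam = np.array([0.5, -1.0, 2.0, 3.5])
n = len(lam)
R = np.vstack([lam**k for k in range(1, n + 1)])

# det R = prod(lam) * Vandermonde determinant prod_{i<j} (lam_j - lam_i)
det_formula = np.prod(lam) * np.prod([lam[j] - lam[i]
                                      for i in range(n)
                                      for j in range(i + 1, n)])
```

Both factors are nonzero exactly when the \(\lambda _{i}\) are distinct and nonzero, matching the case analysis in the proof.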

Continuing in this way, assume that after \(n - k - 1\) steps we can prove that \(n - k - 1\) coefficients from the set \(\left\{ u_{1},\ldots ,u_{n - 1}\right\} \) are equal to zero. Then from Eq. (5.2) we will have

$$\begin{aligned} \left( \rho _{i_{1}} - \rho _{\sigma (i_{1})}\right) u_{i_{1}} + \cdots + \left( \rho _{i_{k - 1}} - \rho _{\sigma (i_{k - 1})}\right) u_{i_{k - 1}} + \left( \rho _{i_{k}} - \rho _{\sigma (i_{k})}\right) u_{i_{k}} = \overline{0}. \end{aligned}$$
(5.5)

Since \(\sigma \left( \left\{ i_{1},\ldots ,i_{k}\right\} \right) \ne \left\{ i_{1},\ldots ,i_{k}\right\} \) we can assume, without loss of generality, that \(\sigma (i_{p})\ne i_{k},\ 1\le p\le k\). Also, there exists an index l, different from any of the indices \(i_{1},\ldots ,i_{k}\), such that \(l\in \sigma \left( \left\{ i_{1},\ldots ,i_{k}\right\} \right) \). Hence the vectors \(\rho _{i_{k}}\) and \(\rho _{l}\) appear only once in the brackets of Eq. (5.5), and thus from Eq. (5.5) we have

$$\begin{aligned}&b_{1}\rho _{1} + \cdots + b_{i_{k} - 1}\rho _{i_{k} - 1} + u_{i_{k}}\rho _{i_{k}} + b_{i_{k} + 1}\rho _{i_{k} + 1}\\&\quad + \cdots + b_{l - 1}\rho _{l - 1} - u_{\sigma ^{-1}(l)}\rho _{l} + b_{l + 1}\rho _{l + 1} + \cdots + b_{n}\rho _{n} = \overline{0} \end{aligned}$$

where \(b_{j}, 1\le j\le n, j\ne i_{k}, j\ne \sigma ^{-1}(l)\) are constants which depend on \(u_{i_{1}},\ldots , u_{i_{k}}\). Since at least one of the vectors \(\rho _{l}\) or \(\rho _{i_{k}}\) is different from zero we can conclude, exactly as before, that either \(u_{i_{k}} = 0\) or \(u_{\sigma ^{-1}(l)} = 0\).

Hence after \(n - 1\) steps we can conclude that \(u_{1} = 0,\ldots ,u_{n - 1} = 0\).

1.2 The case \(k \ge 2\)

For the case \(k\ge 2\) the set \(\{1,2,\ldots ,n\}\) has a partition into disjoint subsets \(A_{1},\ldots ,A_{k}\) which satisfy conditions (i), (ii) and (iii). Since at most one of the vectors \(\rho _{1},\ldots ,\rho _{n}\) is equal to zero and \(k\ge 2\), we can assume, without loss of generality, that all the vectors with indices in the set \(A_{1}\) are different from zero, and again without loss of generality we can assume that

$$\begin{aligned} A_{1} = \{1,\ldots ,m\},\quad 2\le m\le n - 2. \end{aligned}$$

Now assume again that \(u = (u_{1},\ldots ,u_{n})\) is in the kernel of M. We will show that there are two different indices, i and j, such that \(u_{i} = u_{j}\). From the definition of the matrix M it follows that

$$\begin{aligned}&\left( \rho _{1} - \rho _{\sigma (1)}\right) u_{1} + \cdots + \left( \rho _{m} - \rho _{\sigma (m)}\right) u_{m} + \left( \rho _{m + 1} - \rho _{\sigma (m + 1)}\right) u_{m + 1} \\&\quad + \cdots + \left( \rho _{n} - \rho _{\sigma (n)}\right) u_{n} =\overline{0}. \end{aligned}$$

The last equation can be rewritten as

$$\begin{aligned} (u_{1} - u_{\sigma ^{-1}(1)})\rho _{1} + \cdots + (u_{m} - u_{\sigma ^{-1}(m)})\rho _{m} + c_{m + 1}\rho _{m + 1} + \cdots + c_{n}\rho _{n} = \overline{0} \end{aligned}$$
(5.6)

where \(c_{m + 1},\ldots ,c_{n}\) are constants which depend on \(u_{m + 1},\ldots ,u_{n}\). Since, by our assumption, all the vectors \(\rho _{1},\ldots ,\rho _{m}\) are different from zero, they form a submatrix of a Vandermonde-type matrix, which implies that they are linearly independent. Hence from Eq. (5.6) we have

$$\begin{aligned} u_{1} - u_{\sigma ^{-1}(1)} = 0,\ldots , u_{m} - u_{\sigma ^{-1}(m)} = 0. \end{aligned}$$
(5.7)

We will now show that the vector \(\widetilde{u} = (u_{1},\ldots ,u_{m},0,\ldots ,0)\) is also in the kernel of M. Indeed, from the definition of M we have to prove that

$$\begin{aligned}&\left( \rho _{1} - \rho _{\sigma (1)}\right) u_{1} + \cdots + \left( \rho _{m} - \rho _{\sigma (m)}\right) u_{m}\\&\quad = \rho _{1}u_{1} + \cdots + \rho _{m}u_{m} - \underset{(*)}{\underbrace{\rho _{\sigma (1)}u_{1} - \cdots - \rho _{\sigma (m)}u_{m}}} = \overline{0}. \end{aligned}$$

Now by condition (ii) it follows that \(\sigma \left( \{1,\ldots ,m\}\right) = \{1,\ldots ,m\}\) and thus we also have \(\sigma ^{-1}\left( \{1,\ldots ,m\}\right) = \{1,\ldots ,m\}\). Thus the set \(\{1,\ldots ,m\}\) coincides with the set \(\{\sigma ^{-1}(1),\ldots ,\sigma ^{-1}(m)\}\). Using this observation on \((*)\) we have

$$\begin{aligned}&\rho _{1}u_{1} + \cdots + \rho _{m}u_{m} - \rho _{\sigma (1)}u_{1} - \cdots - \rho _{\sigma (m)}u_{m}\\&\quad = \rho _{1}u_{1} + \cdots + \rho _{m}u_{m} - \rho _{\sigma (\sigma ^{-1}(1))}u_{\sigma ^{-1}(1)} - \cdots - \rho _{\sigma (\sigma ^{-1}(m))}u_{\sigma ^{-1}(m)}\\&\quad = \rho _{1}u_{1} + \cdots + \rho _{m}u_{m} - \rho _{1}u_{\sigma ^{-1}(1)} - \cdots - \rho _{m}u_{\sigma ^{-1}(m)}\\&\quad = \left( u_{1} - u_{\sigma ^{-1}(1)}\right) \rho _{1} + \cdots + \left( u_{m} - u_{\sigma ^{-1}(m)}\right) \rho _{m} \end{aligned}$$

and the last expression is equal to zero by (5.7).

Since \({\widetilde{u}}\) is in the kernel of M it follows that the vector \((u_{1},\ldots ,u_{m})\) is in the kernel of the top left \(m\times m\) submatrix of M. That is

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c} \lambda _{1} - \lambda _{\sigma (1)} &{} \ldots &{} \lambda _{m} - \lambda _{\sigma (m)}\\ \ldots \ldots \ldots \ldots \ldots \\ \lambda _{1}^{m} - \lambda _{\sigma (1)}^{m} &{} \ldots &{} \lambda _{m}^{m} - \lambda _{\sigma (m)}^{m} \end{array} \right) \left( \begin{array}{c} u_{1}\\ \ldots \\ u_{m} \end{array} \right) = \left( \begin{array}{c} 0\\ \ldots \\ 0 \end{array} \right) . \end{aligned}$$
(5.8)

Now we can use the induction hypothesis on Eq. (5.8). Indeed, since \(\sigma \left( \{1,\ldots ,m\}\right) = \{1,\ldots ,m\}\) and \(\lambda _{i}\ne \lambda _{j}\) for \(1\le i,j\le m,\ i\ne j\), all of the conditions for the submatrix in the left-hand side of Eq. (5.8) are satisfied. Hence we conclude that there are two different indices, i and j with \(1\le i,j\le m\), such that \(u_{i} = u_{j}\). This in particular implies that the vector u has two equal components with different indices. Thus the case \(k\ge 2\) is also proved. \(\square \)
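Lemma 5.2 can be sanity-checked numerically for small n by computing the kernel of M with an SVD and testing each kernel basis vector for a repeated component. The helper below and its tolerances are our own; it sweeps all non-identity permutations of four distinct \(\lambda _{i}\):

```python
import itertools
import numpy as np

def kernel_has_equal_pair(lam, sigma, tol=1e-9):
    """Build M from (5.1) and check that every kernel basis vector
    has two equal components with different indices."""
    n = len(lam)
    M = np.array([[lam[j]**i - lam[sigma[j]]**i for j in range(n)]
                  for i in range(1, n + 1)])
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol * s[0]))
    for v in vt[rank:]:                      # rows spanning the kernel
        if not any(abs(v[i] - v[j]) < 1e-6
                   for i in range(n) for j in range(i + 1, n)):
            return False
    return True

lam = [0.4, 1.1, -0.7, 2.3]
identity = tuple(range(4))
ok = all(kernel_has_equal_pair(lam, sigma)
         for sigma in itertools.permutations(range(4)) if sigma != identity)
```

Note that \((1,\ldots ,1)\) is always in the kernel (each column sum of powers cancels under \(\sigma \)), so the kernel is never trivial and the check is never vacuous.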


Cite this article

Salman, Y. Recovering finite parametric distributions and functions using the spherical mean transform. Anal.Math.Phys. 8, 437–463 (2018). https://doi.org/10.1007/s13324-017-0171-y
