Abstract
We show that the sparse polynomial interpolation problem reduces to a discrete super-resolution problem on the n-dimensional torus. Therefore, the semidefinite programming approach initiated by Candès and Fernandez-Granda (Commun. Pure Appl. Math. 67(6), 906–956, 2014) in the univariate case can be applied. We extend their result to the multivariate case, i.e., we show that exact recovery is guaranteed provided that a geometric spacing condition on the supports holds and the evaluations are sufficiently many (but not too many). It also turns out that the LP formulation of ℓ1-norm minimization for sparse recovery is guaranteed to provide exact recovery provided that the evaluations are made in a certain manner, even though the restricted isometry property for exact recovery is not satisfied. (A naive sparse recovery LP approach does not offer such a guarantee.) Finally, we also describe the algebraic Prony method for sparse interpolation, which also recovers the exact decomposition but from fewer point evaluations and with no geometric spacing condition. We provide two sets of numerical experiments, one in which the super-resolution technique and Prony’s method seem to cope equally well with noise, and another in which the super-resolution technique seems to cope with noise better than Prony’s method, at the cost of an extra computational burden (i.e., a semidefinite optimization).
References
Azaïs, J.-M., de Castro, Y., Gamboa, F.: Spike detection from inaccurate samplings. Appl. Comput. Harmon. Anal. 38(2), 177–195 (2015)
Ben-Or, M., Tiwari, P.: A deterministic algorithm for sparse multivariate polynomial interpolation. In: Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, pp. 301–309. ACM (1988)
Berlekamp, E.R.: Nonbinary BCH decoding. IEEE Trans. Inf. Theory 14(2), 242 (1968)
Beylkin, G., Monzón, L.: On approximation of functions by exponential sums. Appl. Comput. Harmon. Anal. 19(1), 17–48 (2005)
Candès, E.J.: The restricted isometry property and its implications for compressed sensing. C.R. Acad. Sci. Paris Ser. I 346, 589–592 (2008)
Candès, E.J., Fernandez-Granda, C.: Super-resolution from noisy data. J. Fourier Anal. Appl. 19(6), 1229–1254 (2013)
Candès, E.J., Fernandez-Granda, C.: Towards a mathematical theory of super-resolution. Commun. Pure Appl. Math. 67(6), 906–956 (2014)
Candès, E.J., Plan, Y.: A probabilistic and RIPless theory of compressed sensing. IEEE Trans. Inf. Theory 57(11), 7235–7254 (2011)
Candès, E.J., Tao, T.: Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Inf. Theory 52(12), 5406–5425 (2006)
Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)
Candès, E.J., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)
Curto, R.E., Fialkow, L.A.: Truncated K-moment problems in several variables. J. Operator Theory 54, 189–226 (2005)
de Prony, G.R. (Baron): Essai expérimental et analytique: Sur les lois de la dilatabilité de fluides élastique et sur celles de la force expansive de la vapeur de l’alcool, à différentes températures. J. Ecole Polyt. 1, 24–76 (1795)
Cuyt, A., Lee, W.-S.: Sparse interpolation and rational approximation. In: Hardin, D., Lubinsky, D., Simanek, B., et al. (eds.) Contemporary Mathematics, vol. 661, pp 229–242. American Mathematical Society, Providence (2016)
De Castro, Y., Gamboa, F., Henrion, D., Lasserre, J.-B.: Exact solutions to super-resolution on semi-algebraic domains in higher dimensions. IEEE Trans. Inform. Theory 63, 621–630 (2017)
Dantzig, G.B., Thapa, M.N.: Linear Programming 1: Introduction. Springer-Verlag New York Inc., New York (1997)
Duval, V., Peyré, G.: Exact support recovery for sparse spikes deconvolution. Found. Comput. Math. 15(5), 1315–1355 (2015)
Fan, Y.Y., Kamath, C.: A comparison of compressed sensing and sparse recovery algorithms applied to simulation data. Stat. Optim. Inform. Computing 4, 194–213 (2016)
Filbir, F., Schröder, K.: Exact recovery of discrete measures from Wigner D-moments. arXiv:1606.05306 (2016)
Giesbrecht, M., Labahn, G., Lee, W.-S.: Symbolic–numeric sparse interpolation of multivariate polynomials. J. Symb. Comput. 44(8), 943–959 (2009)
Golub, G., Pereyra, V.: Separable nonlinear least squares: The variable projection method and its applications. Inverse Prob. 19(2), R1–R26 (2003)
Grigoriev, D.Y., Karpinski, M., Singer, M.F.: Fast parallel algorithms for sparse multivariate polynomial interpolation over finite fields. SIAM J. Comput. 19 (6), 1059–1063 (1990)
Harmouch, J., Khalil, H., Mourrain, B.: Structured low rank decomposition of multivariate Hankel matrices. Linear Algebra Appl. (2017)
Hassanieh, H., Indyk, P., Katabi, D., Price, E.: Nearly optimal sparse Fourier transform. In: Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, STOC ’12, pp. 563–578. ACM Press (2012)
Josz, C., Molzahn, D.K.: Large scale complex polynomial optimization. arXiv:1508.02068 (2015)
Kaltofen, E., Lakshman, Y.N.: Sparse multivariate polynomial interpolation algorithms. In: Proceedings of the International Symposium ISSAC’88 on Symbolic and Algebraic Computation, ISSAC ’88, pp 467–474. Springer, London (1989)
Kaltofen, E.L., Lee, W.-S., Yang, Z.: Fast Estimates of Hankel Matrix Condition Numbers and Numeric Sparse Interpolation, pp 130–136. ACM Press, New York (2011)
Krasovskii, N.N.: Theory of Motion Control. Nauka, Moscow (1968). (in Russian)
Kunis, S., Peter, T., Römer, T., von der Ohe, U.: A multivariate generalization of Prony’s method. Linear Algebra Appl. 490, 31–47 (2016)
Laurent, M., Mourrain, B.: A generalized flat extension theorem for moment matrices. Arch. Math. 93(1), 87–98 (2009)
Massey, J.: Shift-register synthesis and BCH decoding. IEEE Trans. Inf. Theory 15(1), 122–127 (1969)
Mourrain, B.: Polynomial-exponential decomposition from moments. Found. Comput. Math. 18(6), 1435–1492 (2018). https://doi.org/10.1007/s10208-017-9372-x
Neustadt, L.W.: Optimization, a moment problem, and nonlinear programming. Journal of the Society for Industrial and Applied Mathematics Series A Control 2(1), 33–53 (1964)
Nie, J.: Optimality conditions and finite convergence of Lasserre’s hierarchy. Math. Program. Ser. A 146(1-2), 97–121 (2014)
Pereyra, V., Scherer, G., et al.: Exponential Data Fitting and Its Applications. Bentham Science Publishers, Sharjah (2012)
Poon, C., Peyré, G.: Multi-dimensional sparse super-resolution. SIAM J. Math. Anal. 51(1), 1–44 (2019)
Potts, D., Tasche, M.: Nonlinear approximation by sums of nonincreasing exponentials. Appl. Anal. 90(3-4), 609–626 (2011)
Roy, R., Kailath, T.: ESPRIT-Estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 37(7), 984–995 (1989)
Rudin, W.: Real and Complex Analysis. McGraw-Hill Education, New York (1986)
Sauer, T.: Prony’s method in several variables. Numer. Math. 136(2), 411–438 (2017). https://doi.org/10.1007/s00211-016-0844-8
Stoica, P., Moses, R.L.: Spectral Analysis of Signals. Pearson/Prentice Hall, Upper Saddle River (2005)
Swindlehurst, A.L., Kailath, T.: A performance analysis of subspace-based methods in the presence of model errors. I. The MUSIC algorithm. IEEE Trans. Signal Process. 40(7), 1758–1774 (1992)
Zippel, R.: Probabilistic algorithms for sparse polynomials. In: Proceedings of the International Symposium on Symbolic and Algebraic Computation, EUROSAM ’79, pp. 216–226. Springer, London (1979)
Zippel, R.: Interpolating polynomials from their values. J. Symb. Comput. 9 (3), 375–403 (1990)
Iohvidov, I.S.: Hankel and Toeplitz Matrices and Forms: Algebraic Theory. Birkhäuser Verlag, Boston (1982)
Comer, M.T., Kaltofen, E.L., Pernet, C.: Sparse polynomial interpolation and Berlekamp/Massey algorithms that correct outlier errors in input values. In: Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC ’12, pp. 138–145. Grenoble, France (2012)
Acknowledgments
The research of the second author was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement 666981 TAMING).
Funding
The work of the first two authors was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement 666981 TAMING).
Additional information
Communicated by: Jon Wilkening
Appendix
Lemma 1
If \(z^{(1)},\ldots,z^{(d)}\) are distinct points of \(\mathbb {C}^{n}\), then \(v_{d-1}(z^{(1)}),\ldots,v_{d-1}(z^{(d)})\) are linearly independent vectors, where \(v_{d}(z) := (z^{\alpha })_{|\alpha | \leq d}\).
Proof
Consider complex numbers \(c_{1},\ldots ,c_{d}\) such that:
\[\sum _{k = 1}^{d} c_{k} \, (z^{(k)})^{\alpha } = 0 \quad \text {for all } |\alpha | \leq d-1. \tag{6.1}\]
Given \(1 \leq l \leq d\), define the Lagrange interpolation polynomial as follows:
\[L^{(l)}(z) := \prod _{k \neq l} \frac {z_{i(k)} - z^{(k)}_{i(k)}}{z^{(l)}_{i(k)} - z^{(k)}_{i(k)}},\]
where \(i(k) \in \{1,\ldots ,n\}\) is an index such that \(z^{(k)}_{i(k)} \neq z^{(l)}_{i(k)}\). It satisfies \(L^{(l)}(z^{(k)}) = 1\) if \(k = l\) and \(L^{(l)}(z^{(k)}) = 0\) if \(k \neq l\). The degree of \(L^{(l)}(z) =: {\sum }_{\alpha } L^{(l)}_{\alpha } z^{\alpha }\) is equal to \(d - 1\). Thus, we may multiply the equation in (6.1) by \(L^{(l)}_{\alpha }\) to obtain the following:
\[\sum _{k = 1}^{d} c_{k} \, L^{(l)}_{\alpha } \, (z^{(k)})^{\alpha } = 0.\]
Summing over all |α|≤ d − 1 yields \({\sum }_{k = 1}^{d} c_{k} ~ L^{(l)}(z^{(k)}) = c_{l} = 0\). □
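Lemma 1 can be checked numerically: stacking the monomial vectors \(v_{d-1}(z^{(k)})\) of distinct random points into a matrix should give full row rank. The sketch below is our own sanity check (the dimensions and names are ours), not part of the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, d = 2, 4
# d random (hence almost surely distinct) points of C^n
Z = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))

# all exponents alpha in N^n with |alpha| <= d - 1
alphas = [a for a in product(range(d), repeat=n) if sum(a) <= d - 1]

def v(z):
    # monomial vector v_{d-1}(z) = (z^alpha)_{|alpha| <= d-1}
    return np.array([np.prod(z**np.array(a)) for a in alphas])

# rows are v_{d-1}(z^(k)); Lemma 1 predicts rank d
V = np.stack([v(Z[k]) for k in range(d)])
rank = np.linalg.matrix_rank(V)
```

For generic points the rank is exactly d, in agreement with the lemma.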
Lemma 2
If \(u_{1},\hdots ,u_{d} \in \mathbb {C}^{n}\) are linearly independent, and \(c_{1},\hdots ,c_{d} \in \mathbb {C}\setminus \{0\}\) , then, \(\mathcal {R}({\sum }_{i = 1}^{d} c_{i} u_{i} {u_{i}^{T}}) = \mathcal {R}({\sum }_{i = 1}^{d} c_{i} u_{i} u_{i}^{*}) = \text {span} \{u_{1},\hdots ,u_{d}\}\) where \(\mathcal {R}\) denotes the range.
Proof
If \(z \in \mathbb {C}^{n}\), then, \(({\sum }_{i = 1}^{d} c_{i} u_{i} {u_{i}^{T}})z = {\sum }_{i = 1}^{d} (c_{i} {u_{i}^{T}} z) u_{i} \in \text {span} \{u_{1},\ldots ,u_{d}\}\) and \(({\sum }_{i = 1}^{d} c_{i} u_{i} u_{i}^{*})z = {\sum }_{i = 1}^{d} (c_{i} u_{i}^{*} z) u_{i} \in \text {span} \{u_{1},\ldots ,u_{d}\}\). Conversely, an element of the span \({\sum }_{i = 1}^{d} \lambda _{i} u_{i}\) with \(\lambda _{1},\ldots ,\lambda _{d} \in \mathbb {C}\) belongs to the range of \({\sum }_{i = 1}^{d} c_{i} u_{i} {u_{i}^{T}}\) if there exists \(z\in \mathbb {C}^{n}\) such that:
\[\Big (\sum _{i = 1}^{d} c_{i} u_{i} u_{i}^{T}\Big ) z = \sum _{i = 1}^{d} \lambda _{i} u_{i},\]
which is equivalent to each of the next three lines:
\[\sum _{i = 1}^{d} (c_{i} u_{i}^{T} z)\, u_{i} = \sum _{i = 1}^{d} \lambda _{i} u_{i},\]
\[c_{i} u_{i}^{T} z = \lambda _{i}, \quad i = 1,\ldots ,d,\]
\[(c_{1} u_{1} ~\cdots ~ c_{d} u_{d})^{T} z = (\lambda _{1},\ldots ,\lambda _{d})^{T},\]
where the second line uses the linear independence of \(u_{1},\ldots ,u_{d}\).
Since \((c_{1} u_{1} \ldots c_{d} u_{d} ) \in \mathbb {C}^{n \times d}\) has rank d, its transpose has rank d. Thus, there exists a desired \(z \in \mathbb {C}^{n}\). Likewise, \({\sum }_{i = 1}^{d} \lambda _{i} u_{i}\) belongs to the range of \({\sum }_{i = 1}^{d} c_{i} u_{i} u_{i}^{*}\) if there exists \(z\in \mathbb {C}^{n}\) such that:
\[\Big (\sum _{i = 1}^{d} c_{i} u_{i} u_{i}^{*}\Big ) z = \sum _{i = 1}^{d} \lambda _{i} u_{i}, \quad \text {i.e.,} \quad c_{i} u_{i}^{*} z = \lambda _{i}, \quad i = 1,\ldots ,d.\]
Since \((c_{1} u_{1} \ldots c_{d} u_{d} ) \in \mathbb {C}^{n \times d}\) has rank d, its conjugate transpose has rank d. Thus, there exists a desired \(z \in \mathbb {C}^{n}\). □
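Lemma 2 likewise admits a quick numerical check: for random (hence almost surely independent) vectors \(u_i\) and nonzero \(c_i\), the columns of \({\sum }_i c_i u_i u_i^T\) and \({\sum }_i c_i u_i u_i^*\) should span exactly \(\text{span}\{u_1,\ldots,u_d\}\). This is our own illustration with arbitrary dimensions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 3
# columns u_1,...,u_d, linearly independent almost surely
U = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
c = np.array([1.0, -2.0, 3.0j])  # nonzero coefficients

A = sum(c[i] * np.outer(U[:, i], U[:, i]) for i in range(d))         # sum_i c_i u_i u_i^T
B = sum(c[i] * np.outer(U[:, i], U[:, i].conj()) for i in range(d))  # sum_i c_i u_i u_i^*

# Range equality with span{u_i}: appending the columns of A (or B) to U
# must not increase the rank, and A itself must have rank d.
rank_U = np.linalg.matrix_rank(U)
rank_A = np.linalg.matrix_rank(A)
rank_AU = np.linalg.matrix_rank(np.hstack([A, U]))
rank_BU = np.linalg.matrix_rank(np.hstack([B, U]))
```

Rank d for all four quantities confirms both inclusions in the proof: the ranges lie inside the span, and they have the same dimension.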
Cite this article
Josz, C., Lasserre, J.B. & Mourrain, B. Sparse polynomial interpolation: sparse recovery, super-resolution, or Prony?. Adv Comput Math 45, 1401–1437 (2019). https://doi.org/10.1007/s10444-019-09672-2