
Infinite-Dimensional \(\ell ^1\) Minimization and Function Approximation from Pointwise Data

Constructive Approximation

Abstract

We consider the problem of approximating a smooth function from finitely many pointwise samples using \(\ell ^1\) minimization techniques. In the first part of this paper, we introduce an infinite-dimensional approach to this problem. Three advantages of this approach are as follows. First, it provides interpolatory approximations in the absence of noise. Second, it does not require a priori bounds on the expansion tail in order to be implemented. In particular, the truncation strategy we introduce as part of this framework is independent of the function being approximated, provided the function has sufficient regularity. Third, it allows one to explain the key role weights play in the minimization, namely, that of regularizing the problem and removing aliasing phenomena. In the second part of this paper, we present a worst-case error analysis for this approach. We provide a general recipe for analyzing this technique for arbitrary deterministic sets of points. Finally, we use this tool to show that weighted \(\ell ^1\) minimization with Jacobi polynomials leads to an optimal method for approximating smooth, one-dimensional functions from scattered data.
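
To make the setting concrete, the following is a minimal numerical sketch of weighted \(\ell ^1\) minimization over a truncated Jacobi basis (here Legendre, i.e., \(\alpha = \beta = 0\)) from scattered samples. It is an illustration only, not the paper's infinite-dimensional formulation: the truncation \(N\), the weights \(w_j = \Vert \phi _j \Vert _{L^\infty }\), the test function, and the use of the cvxpy package are all choices made for this example.

```python
# A minimal sketch (not the paper's exact formulation): recover a smooth
# function from m scattered samples by weighted l^1 minimization over the
# first N orthonormal Legendre polynomials (Jacobi with alpha = beta = 0).
import numpy as np
import cvxpy as cp
from scipy.special import eval_jacobi

rng = np.random.default_rng(0)
f = lambda t: 1.0 / (1.0 + 25.0 * t**2)      # Runge-type test function

m, N = 30, 120                               # m samples, N basis functions
t = np.sort(rng.uniform(-1.0, 1.0, m))       # scattered sample points
y = f(t)

# Orthonormal Legendre polynomials of degree n = 0,...,N-1:
# phi(t) = sqrt((2n+1)/2) * P_n(t), with weights w = ||phi||_inf.
deg = np.arange(N)
A = np.sqrt((2*deg + 1) / 2.0) * eval_jacobi(deg[None, :], 0, 0, t[:, None])
w = np.sqrt((2*deg + 1) / 2.0)               # since ||P_n||_inf = P_n(1) = 1

# Interpolatory (noiseless) weighted l^1 minimization.
x = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y])
prob.solve()

tt = np.linspace(-1.0, 1.0, 1001)
Att = np.sqrt((2*deg + 1) / 2.0) * eval_jacobi(deg[None, :], 0, 0, tt[:, None])
print("max error on [-1,1]:", np.max(np.abs(Att @ x.value - f(tt))))
```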


References

  1. Adcock, B.: Infinite-dimensional compressed sensing and function interpolation. arXiv:1509.06073 (2015)

  2. Adcock, B., Hansen, A.C.: A generalized sampling theorem for stable reconstructions in arbitrary bases. J. Fourier Anal. Appl. 18(4), 685–716 (2012)

  3. Adcock, B., Hansen, A.C.: Generalized sampling and infinite-dimensional compressed sensing. Found. Comput. Math. 16(5), 1263–1323 (2016)

  4. Adcock, B., Hansen, A.C., Poon, C.: Beyond consistent reconstructions: optimality and sharp bounds for generalized sampling, and application to the uniform resampling problem. SIAM J. Math. Anal. 45(5), 3114–3131 (2013)

  5. Adcock, B., Hansen, A.C., Roman, B., Teschke, G.: Generalized sampling: stable reconstructions, inverse problems and compressed sensing over the continuum. Adv. Imaging Electron Phys. 182, 187–279 (2014)

  6. Adcock, B., Huybrechs, D., Martín-Vaquero, J.: On the numerical stability of Fourier extensions. Found. Comput. Math. 14(4), 635–687 (2014)

  7. Adcock, B., Platte, R.: A mapped polynomial method for high-accuracy approximations on arbitrary grids. SIAM J. Numer. Anal. 54(4), 2256–2281 (2016)

  8. Adcock, B., Platte, R., Shadrin, A.: Optimal sampling rates for approximating analytic functions from pointwise samples. arXiv:1610.04769 (2016)

  9. Boyd, J.P., Ong, J.R.: Exponentially-convergent strategies for defeating the Runge phenomenon for the approximation of non-periodic functions. I. Single-interval schemes. Commun. Comput. Phys. 5(2–4), 484–497 (2009)

  10. Canuto, C., Hussaini, M.Y., Quarteroni, A., Zang, T.A.: Spectral Methods: Fundamentals in Single Domains. Springer, New York (2006)

  11. Chandrasekaran, S., Jayaraman, K.R., Mhaskar, H.: Minimum Sobolev norm interpolation with trigonometric polynomials on the torus. J. Comput. Phys. 249, 96–112 (2013)

  12. Chandrasekaran, S., Mhaskar, H.: A minimum Sobolev norm technique for the numerical discretization of PDEs. J. Comput. Phys. 299, 649–666 (2015)

  13. Chkifa, A., Dexter, N., Tran, H., Webster, C.: Polynomial approximation via compressed sensing of high-dimensional functions on lower sets. Technical Report ORNL/TM-2015/497, Oak Ridge National Laboratory (2015)

  14. Cohen, A., Davenport, M.A., Leviatan, D.: On the stability and accuracy of least squares approximations. Found. Comput. Math. 13, 819–834 (2013)

  15. Cohen, A., DeVore, R.A., Schwab, C.: Convergence rates of best \(N\)-term Galerkin approximations for a class of elliptic sPDEs. Found. Comput. Math. 10, 615–646 (2010)

  16. Cohen, A., DeVore, R.A., Schwab, C.: Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDE’s. Anal. Appl. 9, 11–47 (2011)

  17. Demanet, L., Townsend, A.: Stable extrapolation of analytic functions. arXiv:1605.09601 (2016)

  18. Doostan, A., Owhadi, H.: A non-adapted sparse approximation of PDEs with stochastic inputs. J. Comput. Phys. 230(8), 3015–3034 (2011)

  19. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, Basel (2013)

  20. Hampton, J., Doostan, A.: Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies. arXiv:1408.4157 (2014)

  21. Le Maître, O.P., Knio, O.M.: Spectral Methods for Uncertainty Quantification. Springer, New York (2010)

  22. Mathelin, L., Gallivan, K.A.: A compressed sensing approach for partial differential equations with random input data. Commun. Comput. Phys. 12(4), 919–954 (2012)

  23. Migliorati, G., Nobile, F.: Analysis of discrete least squares on multivariate polynomial spaces with evaluations in low-discrepancy point sets. Preprint (2014)

  24. Migliorati, G., Nobile, F., von Schwerin, E., Tempone, R.: Analysis of the discrete \(L^2\) projection on polynomial spaces with random evaluations. Found. Comput. Math. 14, 419–456 (2014)

  25. Peng, J., Hampton, J., Doostan, A.: A weighted \(\ell _1\)-minimization approach for sparse polynomial chaos expansions. J. Comput. Phys. 267, 92–111 (2014)

  26. Platte, R., Trefethen, L.N., Kuijlaars, A.: Impossibility of fast stable approximation of analytic functions from equispaced samples. SIAM Rev. 53(2), 308–318 (2011)

  27. Poon, C.: A stable and consistent approach to generalized sampling. Preprint (2013)

  28. Rauhut, H., Ward, R.: Sparse recovery for spherical harmonic expansions. In: Proceedings of the 9th International Conference on Sampling Theory and Applications (2011)

  29. Rauhut, H., Ward, R.: Sparse Legendre expansions via \(\ell _1\)-minimization. J. Approx. Theory 164(5), 517–533 (2012)

  30. Rauhut, H., Ward, R.: Interpolation via weighted \(\ell _1\) minimization. arXiv:1308.0759 (2013)

  31. Szegö, G.: Orthogonal Polynomials. American Mathematical Society, Providence, RI (1975)

  32. Wendland, H.: Scattered Data Approximation. Cambridge University Press, Cambridge (2004)

  33. Xiu, D., Karniadakis, G.E.: The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput. 24(2), 619–644 (2002)

  34. Yan, L., Guo, L., Xiu, D.: Stochastic collocation algorithms using \(\ell _1\)-minimization. Int. J. Uncertain. Quantif. 2(3), 279–293 (2012)

  35. Yang, X., Karniadakis, G.E.: Reweighted \(\ell _1\) minimization method for stochastic elliptic differential equations. J. Comput. Phys. 248, 87–108 (2013)

Acknowledgements

The work was supported by the Alfred P. Sloan Foundation and the Natural Sciences and Engineering Research Council of Canada through grant 611675. A preliminary version of this work was presented during the Research Cluster on “Computational Challenges in Sparse and Redundant Representations” at ICERM in November 2014. The author would like to thank the participants for the useful feedback received during the program. He would also like to thank Alireza Doostan, Anders Hansen, Rodrigo Platte, Aditya Viswanathan, Rachel Ward, and Dongbin Xiu.

Author information

Corresponding author

Correspondence to Ben Adcock.

Additional information

Communicated by Karlheinz Groechenig.

Appendix: Jacobi Polynomials

Given \(\alpha ,\beta > -1\), let \(P^{(\alpha ,\beta )}_j\) be the Jacobi polynomial of degree \(j\). These polynomials are orthogonal on \(D=(-1,1)\) with respect to the weight function \(\nu ^{(\alpha ,\beta )}(t) = (1-t)^\alpha (1+t)^{\beta }\), with

$$\begin{aligned} \langle P^{(\alpha ,\beta )}_j, P^{(\alpha ,\beta )}_k \rangle _{L^2_{\nu ^{(\alpha ,\beta )}}} = \delta _{j,k} \kappa ^{(\alpha ,\beta )}_j, \end{aligned}$$

where

$$\begin{aligned} \kappa ^{(\alpha ,\beta )}_j = \frac{2^{\alpha +\beta +1}}{2j+\alpha +\beta +1} \frac{\Gamma (j+\alpha +1) \Gamma (j+\beta +1)}{j! \Gamma (j+\alpha +\beta +1)}, \end{aligned}$$
(A.1)

and have the normalization

$$\begin{aligned} P^{(\alpha ,\beta )}_{j}(1) = \left( \begin{array}{c} j + \alpha \\ j \end{array} \right) . \end{aligned}$$

The corresponding orthonormal polynomials are defined by \(\phi _j(t) = \left( \kappa ^{(\alpha ,\beta )}_{j-1} \right) ^{-1/2} P^{(\alpha ,\beta )}_{j-1}(t)\), \(j \in \mathbb {N}\). Note that

$$\begin{aligned} \kappa ^{(\alpha ,\beta )}_{j} \sim 2^{\alpha +\beta } j^{-1} ,\quad j \rightarrow \infty , \end{aligned}$$
(A.2)

and also that

$$\begin{aligned} P^{(\alpha ,\beta )}_{j}(1) \sim \frac{j^{\alpha }}{\Gamma (\alpha +1)},\quad j \rightarrow \infty . \end{aligned}$$
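
The constant (A.1), its asymptotic (A.2), and the normalization at \(t=1\) are easy to sanity-check numerically. The following sketch is an illustration only (the parameter values are arbitrary); it evaluates \(\kappa ^{(\alpha ,\beta )}_j\) via the log-Gamma function for numerical stability:

```python
# Numerical sanity check of (A.1), (A.2), and P_j(1) = binom(j + a, j).
import numpy as np
from scipy.special import gammaln, binom, eval_jacobi

def kappa(j, a, b):
    """kappa_j^{(a,b)} from (A.1), computed via log-Gamma for stability."""
    logk = ((a + b + 1) * np.log(2.0) - np.log(2*j + a + b + 1)
            + gammaln(j + a + 1) + gammaln(j + b + 1)
            - gammaln(j + 1) - gammaln(j + a + b + 1))
    return np.exp(logk)

a, b = 0.5, -0.25
for j in [10, 100, 1000]:
    print(j,
          kappa(j, a, b) / (2.0**(a + b) / j),         # -> 1 by (A.2)
          eval_jacobi(j, a, b, 1.0) / binom(j + a, j))  # == 1 (normalization)
```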

The polynomials \(P^{(\alpha ,\beta )}_j\) satisfy the differential equation

$$\begin{aligned} -\left( \nu ^{(\alpha +1,\beta +1)} \left( P^{(\alpha ,\beta )}_{j} \right) ' \right) ' = \lambda ^{(\alpha ,\beta )}_j \nu ^{(\alpha ,\beta )} P^{(\alpha ,\beta )}_j, \end{aligned}$$
(A.3)

where \(\lambda ^{(\alpha ,\beta )}_{j} = j(j+\alpha +\beta +1)\). In particular, the derivatives \((P^{(\alpha ,\beta )}_{j})'\) are orthogonal with respect to \(\nu ^{(\alpha +1,\beta +1)}\) and satisfy

$$\begin{aligned} \left( P^{(\alpha ,\beta )}_{j} \right) ' = \sqrt{\frac{\lambda ^{(\alpha ,\beta )}_j \kappa ^{(\alpha ,\beta )}_{j}}{\kappa ^{(\alpha +1,\beta +1)}_{j-1}} } P^{(\alpha +1,\beta +1)}_{j-1}. \end{aligned}$$
(A.4)
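
Using (A.1), the constant in (A.4) simplifies to \(\sqrt{\lambda ^{(\alpha ,\beta )}_j \kappa ^{(\alpha ,\beta )}_{j} / \kappa ^{(\alpha +1,\beta +1)}_{j-1}} = (j+\alpha +\beta +1)/2\), the classical constant in the derivative identity for Jacobi polynomials. The following sketch (illustrative only; the parameter values are arbitrary) confirms both the closed form and the identity itself via a central finite difference:

```python
# Check (A.4) and the closed form of its constant.
import numpy as np
from scipy.special import gammaln, eval_jacobi

def kappa(j, a, b):  # (A.1) via log-Gamma
    return np.exp((a + b + 1)*np.log(2.0) - np.log(2*j + a + b + 1)
                  + gammaln(j + a + 1) + gammaln(j + b + 1)
                  - gammaln(j + 1) - gammaln(j + a + b + 1))

a, b, j = 0.5, -0.25, 7
lam = j * (j + a + b + 1)
c = np.sqrt(lam * kappa(j, a, b) / kappa(j - 1, a + 1, b + 1))
print(c, (j + a + b + 1) / 2)     # the two constants coincide

t, h = 0.3, 1e-6                  # central finite difference at t = 0.3
dP = (eval_jacobi(j, a, b, t + h) - eval_jacobi(j, a, b, t - h)) / (2*h)
print(dP, c * eval_jacobi(j - 1, a + 1, b + 1, t))  # agree to many digits
```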

Lemma A.1

Let \(\alpha ,\beta > -1\). Then \(\Vert p' \Vert _{L^2_{\nu ^{(\alpha +1,\beta +1)}}} \le \sqrt{\lambda ^{(\alpha ,\beta )}_M} \Vert p \Vert _{L^2_{\nu ^{(\alpha ,\beta )}}}\), \(\forall p \in \mathbb {P}_M\).

Proof

Let \(p \in \mathbb {P}_M\) be arbitrary, and observe that

$$\begin{aligned} p (t)= \sum ^{M}_{j=0} \frac{x_j}{\kappa ^{(\alpha ,\beta )}_j} P^{(\alpha ,\beta )}_j(t),\qquad x_j = \langle p, P^{(\alpha ,\beta )}_j \rangle _{L^2_{\nu ^{(\alpha ,\beta )}}}. \end{aligned}$$

Note that

$$\begin{aligned} \Vert p \Vert ^2_{L^2_{\nu ^{(\alpha ,\beta )}}} = \sum ^{M}_{j=0} \frac{|x_j |^2}{\kappa ^{(\alpha ,\beta )}_j}. \end{aligned}$$
(A.5)

Similarly,

$$\begin{aligned} p'(t) = \sum ^{M-1}_{j=0} \frac{y_j}{\kappa ^{(\alpha +1,\beta +1)}_{j}} P^{(\alpha +1,\beta +1)}_{j}(t),\qquad y_j = \langle p', P^{(\alpha +1,\beta +1)}_j \rangle _{L^2_{\nu ^{(\alpha +1,\beta +1)}}}, \end{aligned}$$

and

$$\begin{aligned} \Vert p' \Vert ^2_{L^2_{\nu ^{(\alpha +1,\beta +1)}}} = \sum ^{M-1}_{j=0} \frac{|y_j |^2}{\kappa ^{(\alpha +1,\beta +1)}_j}. \end{aligned}$$
(A.6)

Consider \(x_j\). By the differential equation (A.3) and the fact that \(\nu ^{(\alpha +1,\beta +1)}(\pm 1 ) = 0\), we have

$$\begin{aligned} x_j = \int ^{1}_{-1} p(t) P^{(\alpha ,\beta )}_j(t) \nu ^{(\alpha ,\beta )}(t) \,\mathrm {d}t = \frac{1}{\lambda ^{(\alpha ,\beta )}_j} \int ^{1}_{-1} p'(t) \left( P^{(\alpha ,\beta )}_{j}(t) \right) ' \nu ^{(\alpha +1,\beta +1)}(t) \,\mathrm {d}t. \end{aligned}$$

Hence, by (A.4),

$$\begin{aligned} x_j&= \sqrt{\frac{\kappa ^{(\alpha ,\beta )}_{j}}{\lambda ^{(\alpha ,\beta )}_j \kappa ^{(\alpha +1,\beta +1)}_{j-1}} } \langle p', P^{(\alpha +1,\beta +1)}_{j-1} \rangle _{L^2_{\nu ^{(\alpha +1,\beta +1)}}} = \sqrt{\frac{\kappa ^{(\alpha ,\beta )}_{j}}{\lambda ^{(\alpha ,\beta )}_j \kappa ^{(\alpha +1,\beta +1)}_{j-1}} } y_{j-1}. \end{aligned}$$

Using (A.5) and (A.6), we now get that

$$\begin{aligned} \Vert p \Vert ^2_{L^2_{\nu ^{(\alpha ,\beta )}}} \ge \sum ^{M}_{j=1} \frac{|y_{j-1}|^2}{ \lambda ^{(\alpha ,\beta )}_j \kappa ^{(\alpha +1,\beta +1)}_{j-1}} \ge \frac{1}{\lambda ^{(\alpha ,\beta )}_{M} } \Vert p' \Vert ^2_{L^2_{\nu ^{(\alpha +1,\beta +1)}}}, \end{aligned}$$

as required. \(\square \)
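
The inequality is sharp: taking \(p = P^{(\alpha ,\beta )}_M\) and combining (A.4) with (A.1) gives equality. The following sketch (an illustration with arbitrary parameter choices) verifies this equality case, computing both norms exactly by Gauss–Jacobi quadrature:

```python
# Numerical check of Lemma A.1: equality holds for p = P_M^{(a,b)}.
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

a, b, M = 0.5, -0.25, 12
lam_M = M * (M + a + b + 1)

# Gauss-Jacobi rules with M+1 nodes are exact for degree <= 2M+1,
# so both squared norms below are computed exactly.
tq0, wq0 = roots_jacobi(M + 1, a, b)          # weight (1-t)^a (1+t)^b
tq1, wq1 = roots_jacobi(M + 1, a + 1, b + 1)  # weight (1-t)^{a+1}(1+t)^{b+1}

p  = lambda t: eval_jacobi(M, a, b, t)
dp = lambda t: 0.5 * (M + a + b + 1) * eval_jacobi(M - 1, a + 1, b + 1, t)

norm_p  = np.sqrt(np.sum(wq0 * p(tq0)**2))
norm_dp = np.sqrt(np.sum(wq1 * dp(tq1)**2))
print(norm_dp / norm_p, np.sqrt(lam_M))       # the two numbers coincide
```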

We also require several results concerning the asymptotic behavior of Jacobi polynomials. The first is as follows (see [31, Thm. 7.32.1]):

$$\begin{aligned} \Vert P^{(\alpha ,\beta )}_j \Vert _{L^\infty } = \mathcal {O}\left( j^{q} \right) ,\quad j \rightarrow \infty ,\qquad q = \max \{ \alpha ,\beta , -1/2 \} . \end{aligned}$$

Hence, using (A.2), we find that the normalized functions \(\phi _j\) defined by (2.6) satisfy

$$\begin{aligned} \Vert \phi _j \Vert _{L^\infty } = \mathcal {O}\left( j^{q+1/2} \right) ,\quad j \rightarrow \infty , \end{aligned}$$
(A.7)

which gives (2.7). We also note the following local estimates for Jacobi polynomials. If \(k=0,1,2,\ldots \) and \(c >0\) is a fixed constant, then

$$\begin{aligned} \left| \frac{\,\mathrm {d}^kP^{(\alpha ,\beta )}_j(t)}{\,\mathrm {d}t^k} \Bigg |_{t = \cos \theta } \right| = \left\{ \begin{array}{ll} \theta ^{-\alpha -k-1/2} \mathcal {O}\left( j^{k-1/2} \right) , &{} c j^{-1} \le \theta \le \pi /2, \\ \mathcal {O}\left( j^{2k+\alpha } \right) , &{} 0 \le \theta \le c j^{-1}, \end{array} \right. \end{aligned}$$
(A.8)

as \(j \rightarrow \infty \). See [31, Thm. 7.32.4]. This estimate bounds the Jacobi polynomial and its derivatives for \(0 \le t \le 1\). For negative t, we may use the relation

$$\begin{aligned} P^{(\alpha ,\beta )}_j(-t) = (-1)^j P^{(\beta ,\alpha )}_j(t). \end{aligned}$$
(A.9)

Hence the behavior of \(P^{(\alpha ,\beta )}_j(t)\) and its derivatives for \(t<0\) is given by (A.8) with \(\alpha \) replaced by \(\beta \).
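
Both the sup-norm growth (A.7) and the reflection identity (A.9) can be checked empirically. The following sketch (illustrative only; the parameters are arbitrary) does so on a Chebyshev-clustered grid, which resolves the endpoint growth:

```python
# Empirical check of (A.7) and (A.9).
import numpy as np
from scipy.special import gammaln, eval_jacobi

def kappa(j, a, b):  # the normalization constant (A.1), via log-Gamma
    return np.exp((a + b + 1)*np.log(2.0) - np.log(2*j + a + b + 1)
                  + gammaln(j + a + 1) + gammaln(j + b + 1)
                  - gammaln(j + 1) - gammaln(j + a + b + 1))

a, b = 0.5, -0.25
q = max(a, b, -0.5)
t = np.cos(np.linspace(0.0, np.pi, 20001))   # Chebyshev-clustered grid
for j in [8, 32, 128, 512]:
    phi = eval_jacobi(j - 1, a, b, t) / np.sqrt(kappa(j - 1, a, b))
    print(j, np.max(np.abs(phi)) / j**(q + 0.5))   # ratios level off, as in (A.7)

# Reflection identity (A.9): P_j^{(a,b)}(-t) = (-1)^j P_j^{(b,a)}(t).
for j in range(6):
    assert np.allclose(eval_jacobi(j, a, b, -t),
                       (-1)**j * eval_jacobi(j, b, a, t))
```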

Cite this article

Adcock, B. Infinite-Dimensional \(\ell ^1\) Minimization and Function Approximation from Pointwise Data. Constr Approx 45, 345–390 (2017). https://doi.org/10.1007/s00365-017-9369-3
