Infinite-Dimensional \(\ell ^1\) Minimization and Function Approximation from Pointwise Data
We consider the problem of approximating a smooth function from finitely many pointwise samples using \(\ell ^1\) minimization techniques. In the first part of this paper, we introduce an infinite-dimensional approach to this problem. Three advantages of this approach are as follows. First, it provides interpolatory approximations in the absence of noise. Second, it does not require a priori bounds on the expansion tail in order to be implemented. In particular, the truncation strategy we introduce as part of this framework is independent of the function being approximated, provided the function has sufficient regularity. Third, it allows one to explain the key role weights play in the minimization, namely, that of regularizing the problem and removing aliasing phenomena. In the second part of this paper, we present a worst-case error analysis for this approach. We provide a general recipe for analyzing this technique for arbitrary deterministic sets of points. Finally, we use this tool to show that weighted \(\ell ^1\) minimization with Jacobi polynomials leads to an optimal method for approximating smooth, one-dimensional functions from scattered data.
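The weighted \(\ell^1\) minimization described above can be sketched in a few lines. The toy example below is an illustration, not the paper's algorithm: it recovers a smooth one-dimensional function from scattered, noiseless samples by minimizing a weighted \(\ell^1\) norm of Chebyshev coefficients (Chebyshev polynomials are the Jacobi case \(\alpha=\beta=-1/2\)) subject to exact interpolation of the data. The degree-dependent weights, sample count, and dictionary size are illustrative choices; the standard linear-programming reformulation of \(\ell^1\) minimization is solved with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch: recover a smooth 1-D function from m scattered samples
# via weighted l1 minimization over an overcomplete Chebyshev dictionary.
rng = np.random.default_rng(0)
m, N = 20, 60                       # samples, dictionary size (N > m: underdetermined)
x = rng.uniform(-1, 1, m)           # scattered sample points
f = lambda t: np.exp(t) * np.cos(3 * t)
y = f(x)

A = np.polynomial.chebyshev.chebvander(x, N - 1)  # m x N measurement matrix
w = np.sqrt(np.arange(N) + 1.0)     # weights growing with degree (illustrative choice)

# LP reformulation of  min ||W c||_1  s.t.  A c = y,  writing c = c_plus - c_minus.
cost = np.concatenate([w, w])
A_eq = np.hstack([A, -A])
res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
c = res.x[:N] - res.x[N:]

# In the absence of noise the minimizer interpolates the data exactly,
# and the smooth target is approximated well between the samples.
t = np.linspace(-1, 1, 400)
approx = np.polynomial.chebyshev.chebval(t, c)
err = np.max(np.abs(approx - f(t)))
```

Without the weights, \(\ell^1\) minimization over an overcomplete dictionary can place energy on spurious high-degree modes that interpolate the data but oscillate between samples; weights that grow with the degree penalize exactly those aliasing modes, which is the regularizing role discussed above.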
Keywords: Function approximation · \(\ell ^1\) minimization · Scattered data · Polynomials
Mathematics Subject Classification: 41A25 · 41A05 · 41A10
The work was supported by the Alfred P. Sloan Foundation and the Natural Sciences and Engineering Research Council of Canada through grant 611675. A preliminary version of this work was presented during the Research Cluster on “Computational Challenges in Sparse and Redundant Representations” at ICERM in November 2014. The author would like to thank the participants for the useful feedback received during the program. He would also like to thank Alireza Doostan, Anders Hansen, Rodrigo Platte, Aditya Viswanathan, Rachel Ward, and Dongbin Xiu.