
Correcting for unknown errors in sparse high-dimensional function approximation

Abstract

We consider sparsity-based techniques for the approximation of high-dimensional functions from random pointwise evaluations. To date, almost all the works published in this field rely on a priori assumptions about the error corrupting the samples that are hard to verify in practice. In this paper, we instead focus on the scenario where the error is unknown. We study the performance of four sparsity-promoting optimization problems: weighted quadratically-constrained basis pursuit, weighted LASSO, weighted square-root LASSO, and weighted LAD-LASSO. From the theoretical perspective, we prove uniform recovery guarantees for these decoders and derive recipes for the optimal choice of their respective tuning parameters. On the numerical side, we compare them in the pure function approximation case and in applications to uncertainty quantification of ODEs and PDEs with random inputs. Our main conclusion is that the lesser-known square-root LASSO is better suited for high-dimensional approximation than the other procedures in the case of bounded noise, since it avoids (both theoretically and numerically) the need for parameter tuning.
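
For reference, the four decoders can be written in the following standard forms (these are the usual formulations from the sparse regularization literature, see e.g. [12, 31, 50, 62]; the exact normalizations, notation, and parameter scalings used in the paper are specified in its main text and may differ by constant factors). In generic notation, given samples \(y_i = f({\varvec{t}}_i) + e_i\), \(i = 1,\dots ,m\), a truncated orthonormal basis \(\{\psi _j\}_{j=1}^N\), weights \(w_j > 0\), the normalized design matrix \({\varvec{A}}\) with entries \(A_{ij} = \psi _j({\varvec{t}}_i)/\sqrt{m}\) (cf. note 1 below), normalized data \({\varvec{y}} = (y_i/\sqrt{m})_{i=1}^m\), and the weighted norm \(\Vert {\varvec{z}}\Vert _{1,w} = \sum _j w_j |z_j|\), the decoders read
\[
\begin{aligned}
&\text{(WQCBP)} \qquad && \min _{{\varvec{z}}} \ \Vert {\varvec{z}}\Vert _{1,w} \quad \text{subject to } \Vert {\varvec{A}}{\varvec{z}} - {\varvec{y}}\Vert _2 \le \eta ,\\
&\text{(WLASSO)} \qquad && \min _{{\varvec{z}}} \ \lambda \Vert {\varvec{z}}\Vert _{1,w} + \Vert {\varvec{A}}{\varvec{z}} - {\varvec{y}}\Vert _2^2,\\
&\text{(WSR-LASSO)} \qquad && \min _{{\varvec{z}}} \ \lambda \Vert {\varvec{z}}\Vert _{1,w} + \Vert {\varvec{A}}{\varvec{z}} - {\varvec{y}}\Vert _2,\\
&\text{(WLAD-LASSO)} \qquad && \min _{{\varvec{z}}} \ \lambda \Vert {\varvec{z}}\Vert _{1,w} + \Vert {\varvec{A}}{\varvec{z}} - {\varvec{y}}\Vert _1.
\end{aligned}
\]
WQCBP requires an upper bound \(\eta \) on the unknown error, whereas the penalized decoders require a choice of \(\lambda \); the conclusion stated above is that, for bounded noise, WSR-LASSO admits a noise-independent choice of \(\lambda \).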

Notes

  1. The factor \(1/\sqrt{m}\) is needed in order to guarantee the restricted isometry property for the design matrix \({\varvec{A}}\). See Sect. 5.

  2. In practice, for WLAD-LASSO we use \(\lambda = 1.01\) instead of \(\lambda = 1\), since the choice \(\lambda = 1\) leads to the presence of spurious outliers in the box plot. We believe this behavior is due to CVX rather than to the decoder itself (see also the illustrative sketch after these notes).

  3. Notice that the outliers are sometimes aligned (e.g., in the tail of the blue curve in Fig. 1 bottom right). This is due to the structure of the proposed numerical experiment: for each randomized choice of samples, all the parameters are tested using the same samples.

  4. We have performed the same experiment for quantities of interest different from (35), such as the integral of \(u_{{\varvec{t}}}\) over the regions \(\Omega _i\), pointwise evaluations of \(u_{\varvec{t}}\), or the integral of \(u_{{\varvec{t}}}^2\) over \(\Omega _i\) or \(\Omega _F\). In none of these cases do we observe strict global minima in the parameter-versus-error plots. These experiments are not reported here for the sake of brevity.

  5. The same phenomenon is observed for WLASSO and WLAD-LASSO, but the plots are not shown here for the sake of brevity.

  6. Notice that, since \({\varvec{M}}({\varvec{z}}-\hat{{\varvec{z}}}) = {\varvec{0}}\), the constant \(\tau \) does not appear in (72). Indeed, we are just using a 2-level weighted version of the so-called stable null space property (see [31, §4.2]).
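
For context on note 6, here is a hedged restatement of the standard (unweighted) property from [31, §4.2]: a matrix \({\varvec{B}}\) satisfies the stable null space property of order \(s\) with constant \(0< \rho < 1\) if
\[
\Vert {\varvec{v}}_S\Vert _1 \le \rho \, \Vert {\varvec{v}}_{\bar{S}}\Vert _1 \qquad \text{for all } {\varvec{v}} \in \ker {\varvec{B}} \text{ and all } S \text{ with } |S| \le s,
\]
while the robust version holds for all \({\varvec{v}}\) at the price of an extra term \(\tau \Vert {\varvec{B}}{\varvec{v}}\Vert _2\) on the right-hand side. Since \({\varvec{M}}({\varvec{z}}-\hat{{\varvec{z}}}) = {\varvec{0}}\), the error vector lies in the kernel and the robustness term, and with it the constant \(\tau \), drops out of (72). The 2-level weighted variant used in the paper replaces the \(\ell ^1\) norms by their weighted counterparts; see the paper for the precise statement.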
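
Regarding note 2: the numerical experiments are carried out with CVX in MATLAB [35, 36]. Purely as an illustration, the sketch below shows how the weighted square-root LASSO and weighted LAD-LASSO decoders could be set up with the analogous CVXPY package in Python; the function name, the toy data, and the uniform weights are hypothetical and are not taken from the paper.

    import numpy as np
    import cvxpy as cp

    def weighted_decoder(A, y, w, lam, residual="l2"):
        """Sketch of a weighted sparse decoder.

        residual="l2" -> weighted square-root LASSO:
                         min  lam * ||diag(w) z||_1 + ||A z - y||_2
        residual="l1" -> weighted LAD-LASSO:
                         min  lam * ||diag(w) z||_1 + ||A z - y||_1
        """
        z = cp.Variable(A.shape[1])
        penalty = lam * cp.norm(cp.multiply(w, z), 1)
        data_fit = cp.norm(A @ z - y, 2 if residual == "l2" else 1)
        cp.Problem(cp.Minimize(penalty + data_fit)).solve()
        return z.value

    # Hypothetical toy example: a sparse vector observed through noisy random samples.
    rng = np.random.default_rng(0)
    m, N, s = 60, 200, 5
    A = rng.standard_normal((m, N)) / np.sqrt(m)       # 1/sqrt(m) normalization (cf. note 1)
    z_true = np.zeros(N)
    z_true[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
    y = A @ z_true + 1e-3 * rng.standard_normal(m)     # samples corrupted by unknown error
    w = np.ones(N)                                     # uniform weights, for illustration only
    z_hat = weighted_decoder(A, y, w, lam=0.5)         # WSR-LASSO with a noise-independent lam

In the paper's setting, the weights and the value of \(\lambda \) are chosen as prescribed by the theory; the choices above are placeholders.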

References

  1. Adcock, B.: Infinite-dimensional compressed sensing and function interpolation. Found. Comput. Math. 18(3), 661–701 (2018)

  2. Adcock, B.: Infinite-dimensional \(\ell ^1\) minimization and function approximation from pointwise data. Constr. Approx. 45(3), 345–390 (2017)

  3. Adcock, B., Bao, A., Narayan, A., Author, U.: Compressed sensing with sparse corruptions: fault-tolerant sparse collocation approximations (2017). arXiv:1703.00135

  4. Adcock, B., Brugiapaglia, S., Webster, C.G.: Compressed sensing approaches for polynomial approximation of high-dimensional functions (2017). arXiv:1703.06987

  5. Adcock, B., Hansen, A.C., Poon, C., Roman, B.: Breaking the coherence barrier: a new theory for compressed sensing. Forum Math. Sigma 5, e4 (2017). https://doi.org/10.1017/fms.2016.32

  6. Arlot, S., Celisse, A.: A survey of cross-validation procedures for model selection. Stat. Surv. 4, 40–79 (2010)

  7. Arslan, O.: Weighted LAD-LASSO method for robust parameter estimation and variable selection in regression. Comput. Stat. Data Anal. 56(6), 1952–1965 (2012)

  8. Babu, P., Stoica, P.: Connection between SPICE and square-root LASSO for sparse parameter estimation. Signal Process. 95, 10–14 (2014)

  9. Bäck, J., Nobile, F., Tamellini, L., Tempone, R.: Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: a numerical comparison. In: Hesthaven, J.S., Rønquist, E.M. (eds.) Spectral and High Order Methods for Partial Differential Equations: Selected Papers from the ICOSAHOM ’09 Conference, June 22–26, Trondheim, Norway, pp. 43–62. Springer, Berlin (2011)

  10. Ballani, J., Grasedyck, L.: Hierarchical tensor approximation of output quantities of parameter-dependent PDEs. SIAM/ASA J. Uncertain. Quantif. 3(1), 852–872 (2015)

  11. Bastounis, A., Hansen, A.C.: On the absence of the RIP in real-world applications of compressed sensing and the RIP in levels (2014). arXiv:1411.4449

  12. Belloni, A., Chernozhukov, V., Wang, L.: Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika 98(4), 791–806 (2011)

  13. Belloni, A., Chernozhukov, V., Wang, L.: Pivotal estimation via square-root lasso in nonparametric regression. Ann. Stat. 42(2), 757–788 (2014)

  14. Bridges, P.G., Ferreira, K.B., Heroux, M.A., Hoemmen, M.: Fault-tolerant linear solvers via selective reliability (2012). arXiv:1206.1390

  15. Brugiapaglia, S., Adcock, B.: Robustness to unknown error in sparse regularization (2017). arXiv:1705.10299

  16. Brugiapaglia, S., Adcock, B., Archibald, R.K.: Recovery guarantees for compressed sensing with unknown errors. In: 2017 International Conference on Sampling Theory and Applications (SampTA). IEEE (2017)

  17. Bunea, F., Lederer, J., She, Y.: The group square-root lasso: theoretical properties and fast algorithms. IEEE Trans. Inform. Theory 60(2), 1313–1325 (2014)

  18. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006)

  19. Candès, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted \(\ell ^1\) minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008)

  20. Chkifa, A., Cohen, A., Migliorati, G., Nobile, F., Tempone, R.: Discrete least squares polynomial approximation with random evaluations—application to parametric and stochastic elliptic PDEs. ESAIM Math. Model. Numer. Anal. 49(3), 815–837 (2015)

  21. Chkifa, A., Cohen, A., Schwab, C.: High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs. Found. Comput. Math. 14(4), 601–633 (2014)

  22. Chkifa, A., Cohen, A., Schwab, C.: Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs. J. Math. Pures Appl. 103(2), 400–428 (2015)

  23. Chkifa, A., Dexter, N., Tran, H., Webster, C.G.: Polynomial approximation via compressed sensing of high-dimensional functions on lower sets. Math. Comput. 87(311), 1415–1450 (2018)

  24. Cohen, A., DeVore, R., Schwab, C.: Convergence rates of best N-term Galerkin approximations for a class of elliptic sPDEs. Found. Comput. Math. 10(6), 615–646 (2010)

  25. Cohen, A., DeVore, R., Schwab, C.: Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDE’s. Anal. Appl. 9(01), 11–47 (2011)

  26. de Boor, C., Ron, A.: On multivariate polynomial interpolation. Constr. Approx. 6(3), 287–302 (1990)

  27. Donoho, D.L.: Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)

  28. Donoho, D.L., Logan, B.F.: Signal recovery and the large sieve. SIAM J. Appl. Math. 52(2), 577–591 (1992)

  29. Doostan, A., Owhadi, H.: A non-adapted sparse approximation of PDEs with stochastic inputs. J. Comput. Phys. 230(8), 3015–3034 (2011)

  30. Dyn, N., Floater, M.S.: Multivariate polynomial interpolation on lower sets. J. Approx. Theory 177, 34–42 (2014)

  31. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Appl. Numer. Harmon. Anal. Springer, New York (2013)

  32. Friedlander, M.P., Mansour, H., Saab, R., Yilmaz, O.: Recovering compressively sampled signals using partial support information. IEEE Trans. Inform. Theory 58(2), 1122–1134 (2012)

  33. Gao, X.: Penalized methods for high-dimensional least absolute deviations regression. Ph.D. Thesis, The University of Iowa (2008)

  34. Gao, X., Huang, J.: Asymptotic analysis of high-dimensional LAD regression with LASSO. Stat. Sin. 20(4), 1485–1506 (2010)

  35. Grant, M., Boyd, S.: Graph implementations for nonsmooth convex programs. In: Blondel, V., Boyd, S., Kimura, H. (eds.) Recent Advances in Learning and Control. Lecture Notes in Control and Information Sciences, pp. 95–110. Springer-Verlag Limited (2008)

  36. Grant, M., Boyd, S.: CVX: MATLAB software for disciplined convex programming, version 2.1. http://cvxr.com/cvx (March 2014)

  37. Hastie, T., Tibshirani, R., Wainwright, M.: Statistical Learning with Sparsity: the Lasso and Generalizations. CRC Press, Boca Raton (2015)

  38. Hecht, F.: New development in FreeFem++. J. Numer. Math. 20(3–4), 251–265 (2012)

  39. Jakeman, J.D., Eldred, M.S., Sargsyan, K.: Enhancing \(\ell ^1\)-minimization estimates of polynomial chaos expansions using basis selection. J. Comput. Phys. 289, 18–34 (2015)

  40. Laska, J.N., Davenport, M.A., Baraniuk, R.G.: Exact signal recovery from sparsely corrupted measurements through the pursuit of justice. In: 2009 Conference Record of the 43rd Asilomar Conference on Signals, Systems and Computers, pp. 1556–1560. IEEE (2009)

  41. Li, Q., Wang, L.: Robust change point detection method via adaptive LAD-LASSO. Stat. Pap. 1–13 (2017). https://doi.org/10.1007/s00362-017-0927-3

  42. Li, X.: Compressed sensing and matrix completion with constant proportion of corruptions. Constr. Approx. 37(1), 73–99 (2013)

  43. Logan, B.F.: Properties of high-pass signals. Ph.D. Thesis, Columbia University (1965)

  44. Lorentz, G.G., Lorentz, R.A.: Solvability problems of bivariate interpolation I. Constr. Approx. 2(1), 153–169 (1986)

  45. Migliorati, G., Nobile, F., von Schwerin, E., Tempone, R.: Analysis of discrete \(L^2\) projection on polynomial spaces with random evaluations. Found. Comput. Math. 14(3), 419–456 (2014)

  46. Nguyen, N.H., Tran, T.D.: Exact recoverability from dense corrupted observations via \(\ell _1\)-minimization. IEEE Trans. Inform. Theory 59(4), 2017–2035 (2013)

  47. Peng, J., Hampton, J., Doostan, A.: A weighted \(\ell _1\) minimization approach for sparse polynomial chaos expansions. J. Comput. Phys. 267, 92–111 (2014)

  48. Pham, V., El Ghaoui, L.: Robust sketching for multiple square-root lasso problems. In: Artificial Intelligence and Statistics, pp. 753–761 (2015)

  49. Rauhut, H., Schwab, C.: Compressive sensing Petrov–Galerkin approximation of high-dimensional parametric operator equations. Math. Comput. 86(304), 661–700 (2017)

  50. Rauhut, H., Ward, R.: Interpolation via weighted \(\ell _1\) minimization. Appl. Comput. Harmon. Anal. 40(2), 321–351 (2016)

  51. Shin, Y., Xiu, D.: Correcting data corruption errors for multivariate function approximation. SIAM J. Sci. Comput. 38(4), A2492–A2511 (2016)

  52. Stankovic, L., Stankovic, S., Amin, M.: Missing samples analysis in signals for applications to L-estimation and compressive sensing. Signal Process. 94, 401–408 (2014)

  53. Stucky, B., van de Geer, S.A.: Sharp oracle inequalities for square root regularization (2015). arXiv:1509.04093

  54. Studer, C., Kuppinger, P., Pope, G., Bolcskei, H.: Recovery of sparsely corrupted signals. IEEE Trans. Inform. Theory 58(5), 3115–3130 (2012)

  55. Su, D.: Compressed sensing with corrupted Fourier measurements (2016). arXiv:1607.04926

  56. Su, D.: Data recovery from corrupted observations via \(\ell _1\) minimization (2016). arXiv:1601.06011

  57. Sun, T., Zhang, C.-H.: Scaled sparse linear regression. Biometrika 99(4), 879–898 (2012)

  58. Tian, X., Loftus, J.R., Taylor, J.E.: Selective inference with unknown variance via the square-root lasso (2015). arXiv:1504.08031

  59. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Methodol. 58(1), 267–288 (1996)

  60. van de Geer, S.A.: Estimation and Testing Under Sparsity. Springer, Berlin (2016)

  61. Wagener, J., Dette, H.: The adaptive lasso in high-dimensional sparse heteroscedastic models. Math. Methods Stat. 22(2), 137–154 (2013)

  62. Wang, H., Li, G., Jiang, G.: Robust regression shrinkage and consistent variable selection through the LAD-Lasso. J. Bus. Econ. Stat. 25(3), 347–355 (2007)

  63. Wright, J., Ma, Y.: Dense error correction via \(\ell ^1\)-minimization. IEEE Trans. Inform. Theory 56(7), 3540–3560 (2010)

  64. Xu, J.: Parameter estimation, model selection and inferences in L1-based linear regression. Ph.D. Thesis, Columbia University (2005)

  65. Xu, J., Ying, Z.: Simultaneous estimation and variable selection in median regression using lasso-type penalty. Ann. Inst. Stat. Math. 62(3), 487–514 (2010)

  66. Yan, L., Guo, L., Xiu, D.: Stochastic collocation algorithms using \(\ell _1\)-minimization. Int. J. Uncertain. Quantif. 2(3), 279–293 (2012)

  67. Yang, X., Karniadakis, G.E.: Reweighted \(\ell ^1\) minimization method for stochastic elliptic differential equations. J. Comput. Phys. 248, 87–108 (2013)

  68. Yu, X., Baek, S.J.: Sufficient conditions on stable recovery of sparse signals with partial support information. IEEE Signal Process. Lett. 20(5), 539–542 (2013)

  69. Zou, H.: The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 101(476), 1418–1429 (2006)

Acknowledgements

BA, AB and SB acknowledge the Natural Sciences and Engineering Research Council of Canada through Grant 611675 and the Alfred P. Sloan Foundation and the Pacific Institute for the Mathematical Sciences (PIMS) Collaborative Research Group “High-Dimensional Data Analysis”. SB acknowledges the support of the PIMS Post-doctoral Training Center in Stochastics. The authors are grateful to Claire Boyer, John Jakeman, Richard Lockhart, Akil Narayan, and Clayton G. Webster for interesting and fruitful discussions.

Author information

Correspondence to Simone Brugiapaglia.

Cite this article

Adcock, B., Bao, A. & Brugiapaglia, S. Correcting for unknown errors in sparse high-dimensional function approximation. Numer. Math. 142, 667–711 (2019). https://doi.org/10.1007/s00211-019-01051-9
