Compressive Sampling Methods for Sparse Polynomial Chaos Expansions

  • Jerrad Hampton
  • Alireza Doostan
Reference work entry


A salient task in uncertainty quantification (UQ) is to study the dependence of a quantity of interest (QoI) on input variables representing system uncertainties. Relying on linear expansions of the QoI in orthogonal polynomial bases of the inputs, polynomial chaos expansions (PCEs) are now among the most widely used methods in UQ. When the solution being approximated is smooth, the PCE exhibits sparsity, in that only a small fraction of the expansion coefficients are significant. By exploiting this sparsity, compressive sampling, also known as compressed sensing, provides a natural framework for computing accurate PCEs from relatively few evaluations of the QoI, and in a manner that does not require intrusion into legacy solvers. A rich structure connects the QoI being approximated, the polynomials and input variables used to perform the approximation, and the points at which the QoI is evaluated. This chapter provides insight into this structure, summarizing a portion of the current literature on PCE via compressive sampling within the context of UQ.
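The idea described above can be illustrated with a small self-contained sketch: build a one-dimensional Legendre PCE with a sparse coefficient vector, evaluate the QoI at a few random sample points, and recover the coefficients from the underdetermined system with a greedy sparse solver. The sketch below uses a hand-written orthogonal matching pursuit; all sizes, the sparse coefficient choice, and the sampling scheme are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, s = 30, 15, 3  # basis size, number of QoI samples (N < P), sparsity

# Exact PCE coefficients: sparse by construction (hypothetical values).
c_true = np.zeros(P)
c_true[[1, 4, 9]] = [2.0, -1.0, 0.5]

# Measurement matrix: Legendre polynomials evaluated at uniform samples,
# with columns scaled to be orthonormal with respect to U(-1, 1).
xi = rng.uniform(-1.0, 1.0, N)
Psi = np.polynomial.legendre.legvander(xi, P - 1)
Psi *= np.sqrt(2 * np.arange(P) + 1)

u = Psi @ c_true  # noiseless QoI evaluations

def omp(A, b, n_iter):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit the active set by
    least squares and update the residual."""
    support, r = [], b.copy()
    for _ in range(n_iter):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        c_s, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ c_s
    c = np.zeros(A.shape[1])
    c[support] = c_s
    return c

c_hat = omp(Psi, u, s)
# Coefficient error is near zero when OMP identifies the true support.
print(np.linalg.norm(c_hat - c_true))
```

Greedy methods such as OMP are one family of compressive sampling solvers; convex ℓ1-minimization (basis pursuit) is the other common route and would replace the `omp` call with an ℓ1-regularized solve.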


Keywords: Legendre polynomials · Hermite polynomials · Orthogonal polynomials · Compressed sensing · Polynomial chaos expansions · Markov chain Monte Carlo · ℓ1-minimization · Basis pursuit · Sparse approximation



Copyright information

© Springer International Publishing Switzerland 2017

Authors and Affiliations

  • Jerrad Hampton (1)
  • Alireza Doostan (1)
  1. Aerospace Engineering Sciences, University of Colorado, Boulder, USA
