A Simple Tool for Bounding the Deviation of Random Matrices on Geometric Sets

  • Christopher Liaw
  • Abbas Mehrabian
  • Yaniv Plan
  • Roman Vershynin
Part of the Lecture Notes in Mathematics book series (LNM, volume 2169)


Let A be an isotropic, sub-gaussian m × n matrix. We prove that the process \(Z_{x}\,:=\,\left \|Ax\right \|_{2} -\sqrt{m}\left \|x\right \|_{2}\) has sub-gaussian increments, that is, \(\|Z_{x} - Z_{y}\|_{\psi _{2}} \leq C\|x - y\|_{2}\) for any \(x,y \in \mathbb{R}^{n}\). Using this, we show that for any bounded set \(T \subseteq \mathbb{R}^{n}\), the deviation of \(\left \|Ax\right \|_{2}\) around its mean is uniformly bounded by the Gaussian complexity of T. We also prove a local version of this theorem, which allows for unbounded sets. These theorems have various applications, some of which are reviewed in this paper. In particular, we give a new result regarding model selection in the constrained linear model.
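As a rough numerical illustration (not from the paper), the uniform deviation bound can be checked for a finite random subset T of the unit sphere: draw a Gaussian matrix A, compute \(\sup _{x\in T}\big \vert \left \|Ax\right \|_{2} -\sqrt{m}\left \|x\right \|_{2}\big \vert\), and compare it with a Monte Carlo estimate of the Gaussian complexity \(\mathbb{E}\sup _{x\in T}\vert \langle g,x\rangle \vert\). The choice of T, the sample sizes, and the random seed below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50

# Isotropic sub-gaussian matrix: i.i.d. standard Gaussian entries.
A = rng.standard_normal((m, n))

# A finite test set T: 1000 random points on the unit sphere in R^n.
T = rng.standard_normal((n, 1000))
T /= np.linalg.norm(T, axis=0)

# Uniform deviation sup_{x in T} | ||Ax||_2 - sqrt(m)||x||_2 |
# (here ||x||_2 = 1 for every column of T).
deviation = np.max(np.abs(np.linalg.norm(A @ T, axis=0) - np.sqrt(m)))

# Monte Carlo estimate of the Gaussian complexity E sup_{x in T} |<g, x>|.
g = rng.standard_normal((100, n))
gamma_T = np.mean(np.max(np.abs(g @ T), axis=1))

print(f"deviation = {deviation:.3f}, Gaussian complexity ~ {gamma_T:.3f}")
```

With these parameters the observed deviation is of the same order as the estimated Gaussian complexity, consistent with the theorem's claim that the former is bounded by a constant multiple of the latter.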


Keywords: Random Matrix · Tangent Cone · Random Projection · Orlicz Function · Random Subspace



Christopher Liaw is partially supported by an NSERC graduate scholarship. Abbas Mehrabian is supported by an NSERC Postdoctoral Fellowship. Yaniv Plan is partially supported by NSERC grant 22R23068. Roman Vershynin is partially supported by NSF grant DMS 1265782 and USAF Grant FA9550-14-1-0009.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Christopher Liaw (1)
  • Abbas Mehrabian (1, 2)
  • Yaniv Plan (3)
  • Roman Vershynin (4)
  1. Department of Computer Science, University of British Columbia, Vancouver, Canada
  2. School of Computing Science, Simon Fraser University, Burnaby, Canada
  3. Department of Mathematics, University of British Columbia, Vancouver, Canada
  4. Department of Mathematics, University of Michigan, Ann Arbor, USA
