Abstract
For a family of interpolation norms \(\| \cdot \|_{1,2,s}\) on \(\mathbb{R}^{n}\), we provide a distribution over random matrices \(\Phi_s \in \mathbb{R}^{m \times n}\), parametrized by sparsity level \(s\), such that for a fixed set \(X\) of \(K\) points in \(\mathbb{R}^{n}\), if \(m \geq C s \log(K)\) then with high probability, \(\frac{1}{2} \| \boldsymbol{x} \|_{1,2,s} \leq \| \Phi_s \boldsymbol{x} \|_1 \leq 2 \| \boldsymbol{x} \|_{1,2,s}\) for all \(\boldsymbol{x} \in X\). Several existing results in the literature roughly reduce to special cases of this result at different values of \(s\): for \(s = n\), \(\| \boldsymbol{x} \|_{1,2,n} \equiv \| \boldsymbol{x} \|_{1}\), and we recover that dimension-reducing linear maps can preserve the \(\ell_1\)-norm up to a distortion proportional to the dimension reduction factor, which is known to be the best possible such result. For \(s = 1\), \(\| \boldsymbol{x} \|_{1,2,1} \equiv \| \boldsymbol{x} \|_{2}\), and we recover an \(\ell_2/\ell_1\) variant of the Johnson–Lindenstrauss Lemma for Gaussian random matrices. Finally, if \(\boldsymbol{x}\) is \(s\)-sparse, then \(\| \boldsymbol{x} \|_{1,2,s} = \| \boldsymbol{x} \|_1\), and we recover that \(s\)-sparse vectors in \(\ell_1^n\) embed into \(\ell_1^{\mathcal{O}(s \log(n))}\) via sparse random matrix constructions.
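The \(s = 1\) case can be illustrated numerically: for a Gaussian matrix \(\Phi\), each coordinate of \(\Phi \boldsymbol{x}\) is \(\mathcal{N}(0, \|\boldsymbol{x}\|_2^2)\)-distributed, so after rescaling by \(m\sqrt{2/\pi}\) the expected \(\ell_1\)-norm of the image equals \(\|\boldsymbol{x}\|_2\). The following sketch (an illustrative check, not the paper's exact construction; the dimensions \(n, m, K\) are arbitrary choices) verifies that the ratio \(\|\Phi \boldsymbol{x}\|_1 / \|\boldsymbol{x}\|_2\) concentrates well inside the interval \([1/2, 2]\) for a fixed point set.

```python
import numpy as np

# Illustrative numerical check of the s = 1 case (l2/l1 JL variant):
# a scaled Gaussian matrix Phi satisfies ||Phi x||_1 ~ ||x||_2.
rng = np.random.default_rng(0)
n, m, K = 1000, 200, 50  # ambient dimension, sketch dimension, number of points

# Each (Phi x)_i is N(0, ||x||_2^2), so E|(Phi x)_i| = sqrt(2/pi) * ||x||_2.
# Dividing by m * sqrt(2/pi) makes E||Phi x||_1 = ||x||_2.
Phi = rng.standard_normal((m, n)) / (m * np.sqrt(2 / np.pi))

X = rng.standard_normal((K, n))  # a fixed set of K points in R^n
ratios = [np.linalg.norm(Phi @ x, 1) / np.linalg.norm(x, 2) for x in X]
print(min(ratios), max(ratios))  # both close to 1, within (1/2, 2)
```

By a standard concentration argument the relative fluctuation of each ratio is of order \(1/\sqrt{m}\), so with \(m = 200\) the observed ratios cluster tightly around 1.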
Cite this article
Krahmer, F., Ward, R.: A Unified Framework for Linear Dimensionality Reduction in L1. Results Math. 70, 209–231 (2016). https://doi.org/10.1007/s00025-015-0475-x