Abstract
We propose a simple, generic technique called dyadic rounding for rounding real vectors to zero-one vectors, and demonstrate several of its applications: approximating singular vectors of matrices by zero-one vectors, cut decompositions of matrices, and norm optimization problems. Our rounding technique leads to the following consequences.
1. Given any A ∈ ℝ^{m×n}, there exists z ∈ {0, 1}^n such that
$$ \frac{\left\|Az\right\|_{q}}{\left\|z\right\|_{p}} \geq \Omega\left(p^{1 - \frac{1}{p}} (\log n)^{\frac{1}{p} - 1}\right) \left\|A\right\|_{p \mapsto q}, $$
where \(\left\|A\right\|_{p \mapsto q} = \max_{x \neq 0} \left\|Ax\right\|_{q} / \left\|x\right\|_{p}\). Moreover, given any vector v ∈ ℝ^n we can round it to a vector z ∈ {0, 1}^n with the same approximation guarantee as above, but now the guarantee is with respect to \(\left\|Av\right\|_{q}/\left\|v\right\|_{p}\) instead of \(\left\|A\right\|_{p \mapsto q}\). Although stated for the p ↦ q norm, this generalizes to the case where \(\left\|Az\right\|_{q}\) is replaced by any norm of z.
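The abstract does not spell out the procedure, but dyadic rounding plausibly amounts to bucketing the coordinates of v into dyadic magnitude levels and keeping the best sign-restricted level set as the 0-1 vector; the following is a minimal sketch under that assumption (the function name and details are ours, not the paper's):

```python
import numpy as np

def dyadic_round(A, v, p=2, q=2):
    """Hedged sketch of dyadic rounding (not the authors' verbatim algorithm):
    try the 0-1 indicators of all sign-restricted dyadic level sets of v,
    i.e. {i : sign * v_i >= vmax / 2^j}, and return the one maximizing
    ||Az||_q / ||z||_p."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    vmax = np.abs(v).max()
    levels = int(np.ceil(np.log2(n))) + 1   # only O(log n) dyadic thresholds
    best_z, best_ratio = None, -np.inf
    for sign in (1.0, -1.0):
        for j in range(levels):
            z = ((sign * v) >= vmax / 2.0 ** j).astype(float)
            if not z.any():
                continue                    # empty level set, skip
            ratio = np.linalg.norm(A @ z, ord=q) / np.linalg.norm(z, ord=p)
            if ratio > best_ratio:
                best_ratio, best_z = ratio, z
    return best_z, best_ratio
```

Note that only O(log n) candidates per sign are examined, which is consistent with the (log n)^{1/p − 1} loss in the stated bound.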
2. Given any A ∈ ℝ^{m×n}, we can efficiently find z ∈ {0, 1}^n such that
$$ \frac{\left\|Az\right\|}{\left\|z\right\|} \geq \frac{\sigma_{1}(A)}{2 \sqrt{2 \log n}}, $$
where σ₁(A) is the top singular value of A. Extending this, we can efficiently find orthogonal z₁, z₂, …, z_k ∈ {0, 1}^n such that
$$ \frac{\left\|Az_{i}\right\|}{\left\|z_{i}\right\|} \geq \Omega\left(\frac{\sigma_{k}(A)}{\sqrt{k \log n}}\right), \quad \text{for all } i \in [k]. $$
We complement these results by showing that they are almost tight.
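For the p = q = 2 case, the guarantee can be checked numerically by rounding the top right singular vector. The sketch below (illustrative, not the paper's code) scans all sign-restricted threshold sets of that vector, a superset of the dyadic level sets, so it can only do better than the dyadic bound:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 80, 60
A = rng.standard_normal((m, n))

# Top singular value and right singular vector.
_, s, Vt = np.linalg.svd(A)
v = Vt[0]

best = 0.0
for sign in (1.0, -1.0):
    w = sign * v
    order = np.argsort(-w)      # coordinates in decreasing order of w
    z = np.zeros(n)
    for i in order:
        if w[i] <= 0:
            break
        z[i] = 1.0              # grow the threshold set one coordinate at a time
        best = max(best, np.linalg.norm(A @ z) / np.linalg.norm(z))

# Claimed bound (log base taken as 2 here; the abstract leaves it unspecified).
bound = s[0] / (2 * np.sqrt(2 * np.log2(n)))
```

On random Gaussian matrices the best rounded ratio comfortably exceeds the bound, while never exceeding σ₁(A) itself.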
3. Given any A ∈ ℝ^{m×n} of rank r, we can approximate it (in the Frobenius norm) by a sum of O(r log² m log² n) cut matrices, within an error of at most \(\left\|A\right\|_{F}/\mathrm{poly}(m, n)\). In comparison, the singular value decomposition uses r rank-1 terms in the sum (though not necessarily cut matrices) and has zero error, whereas the cut decomposition lemma of Frieze and Kannan in their algorithmic version of Szemerédi's regularity partition [9,10] uses only O(1/ε²) cut matrices but incurs a large \({\epsilon} \sqrt{mn} \left\|A\right\|_{F}\) error (in the cut norm). Our algorithm is deterministic and more efficient for the corresponding error range.
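A cut matrix is a matrix of the form d·𝟙_S 𝟙_T^⊤: constant d on a combinatorial rectangle S × T and zero elsewhere. The snippet below illustrates one greedy Frieze–Kannan-style step (the rectangle S, T here is an arbitrary choice for illustration, not the paper's construction); setting d to the average of A over S × T always reduces the squared Frobenius error by exactly |S||T|·d²:

```python
import numpy as np

def cut_matrix(d, S, T, shape):
    """The cut matrix d * 1_S 1_T^T: value d on S x T, zero elsewhere."""
    C = np.zeros(shape)
    C[np.ix_(S, T)] = d
    return C

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 10))

# One illustrative cut: average A over an (arbitrarily chosen) rectangle
# and subtract; the mean is the least-squares constant fit on S x T.
S, T = [0, 2, 5], [1, 3, 4, 9]
d = A[np.ix_(S, T)].mean()
C = cut_matrix(d, S, T, A.shape)
residual = np.linalg.norm(A - C, "fro")
```

Repeating such steps with well-chosen rectangles yields the cut decompositions discussed above.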
References
Alon, N., de la Vega, F., Kannan, R., Karpinski, M.: Random sampling and approximation of Max-CSPs. Journal of Computer and System Sciences 67, 212–243 (2003)
Alon, N., Naor, A.: Approximating the cut-norm via Grothendieck’s inequality. SIAM Journal on Computing (SICOMP) 35(4), 787–803 (2006)
Bilu, Y., Linial, N.: Lifts, discrepancy and nearly optimal spectral gaps. Combinatorica 26, 495–519 (2006)
Bollobás, B., Nikiforov, V.: Graphs and Hermitian matrices: discrepancy and singular values. Discrete Mathematics 285 (2004)
Boyd, D.W.: The power method for p-norms. Linear Algebra and Its Applications 9, 95–101 (1974)
Brubaker, S.C., Vempala, S.S.: Random Tensors and Planted Cliques. In: Dinur, I., Jansen, K., Naor, J., Rolim, J. (eds.) APPROX 2009. LNCS, vol. 5687, pp. 406–419. Springer, Heidelberg (2009)
Deshpande, A., Tulsiani, M., Vishnoi, N.: Algorithms and hardness for subspace approximation. In: ACM-SIAM Symposium on Discrete Algorithms, SODA 2011 (2011)
Doerr, B.: Roundings Respecting Hard Constraints. In: Diekert, V., Durand, B. (eds.) STACS 2005. LNCS, vol. 3404, pp. 617–628. Springer, Heidelberg (2005)
Frieze, A., Kannan, R.: The regularity lemma and approximation schemes for dense problems. In: IEEE Symposium on Foundations of Computer Science (FOCS 1996), pp. 12–20 (1996)
Frieze, A., Kannan, R.: Quick approximation to matrices and applications. Combinatorica 19(2), 175–220 (1999)
Golub, G., van Loan, C.: Matrix Computations. Johns Hopkins University Press (1996)
Higham, N.J.: Estimating the matrix p-norm. Numerische Mathematik 62, 511–538 (1992)
Kasiviswanathan, S.P., Rudelson, M., Smith, A., Ullman, J.: The price of privately releasing contingency tables and the spectra of random matrices with correlated rows. In: STOC 2010, pp. 775–784 (2010)
Mahoney, M., Drineas, P.: CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences USA 106, 697–702 (2009)
Matoušek, J.: The determinant bound for discrepancy is almost tight (2011), http://arxiv.org/PS_cache/arxiv/pdf/1101/1101.0767v2.pdf
Nesterov, Y.: Semidefinite relaxation and nonconvex quadratic optimization. Optimization Methods and Software 9, 141–160 (1998)
Steinberg, D.: Computation of matrix norms with applications to robust optimization. Research thesis. Technion – Israel Institute of Technology (2005)
Szemerédi, E.: Regular partitions of graphs. Problèmes combinatoires et théorie des graphes (Colloq. Internat. CNRS), Paris 260, 399–401 (1976)
Vazirani, V.: Approximation Algorithms. Springer (2001)
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Deshpande, A., Kannan, R., Srivastava, N. (2012). Zero-One Rounding of Singular Vectors. In: Czumaj, A., Mehlhorn, K., Pitts, A., Wattenhofer, R. (eds) Automata, Languages, and Programming. ICALP 2012. Lecture Notes in Computer Science, vol 7391. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31594-7_24
Print ISBN: 978-3-642-31593-0
Online ISBN: 978-3-642-31594-7