Abstract
In many applications, one is faced with an inverse problem where the observed signal depends in a bilinear way on two unknown input vectors. Often at least one of the input vectors is assumed to be sparse, i.e., to have only a few non-zero entries. Sparse power factorization (SPF), proposed by Lee, Wu, and Bresler, aims to tackle this problem. They have established recovery guarantees for a somewhat restrictive class of signals under the assumption that the measurements are random. We generalize these recovery guarantees to a significantly enlarged and more realistic signal class at the expense of a moderately increased number of measurements.
References
Ahmed, A., Recht, B., Romberg, J.: Blind deconvolution using convex programming. IEEE Trans. Inform. Theory 60(3), 1711–1732 (2014)
Amini, A.A., Wainwright, M.J.: High-dimensional analysis of semidefinite relaxations for sparse principal components. Ann. Stat. 37(5B), 2877–2921 (2009)
Bahmani, S., Romberg, J.: Near-optimal estimation of simultaneously sparse and low-rank matrices from nested linear measurements. Inf. Inference 5(3), 331–351 (2016)
Bahmani, S., Romberg, J.: Solving equations of random convex functions via anchored regression. arXiv:1702.05327 (2017)
Berthet, Q., Rigollet, P.: Optimal detection of sparse principal components in high dimension. Ann. Stat. 41(4), 1780–1815 (2013)
Candès, E.J., Li, X., Soltanolkotabi, M.: Phase retrieval via Wirtinger flow: theory and algorithms. IEEE Trans. Inform. Theory 61(4), 1985–2007 (2015)
Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006)
d’Aspremont, A., Bach, F., Ghaoui, L.E.: Optimal solutions for sparse principal component analysis. J. Mach. Learn. Res. 9, 1269–1294 (2008)
Deshpande, Y., Montanari, A.: Sparse PCA via covariance thresholding. In: Advances in Neural Information Processing Systems, pp. 334–342 (2014)
Fornasier, M., Maly, J., Naumova, V.: A-T-LAS\({}_{2,1}\): a multi-penalty approach to compressed sensing of low-rank matrices with sparse decompositions. arXiv:1801.06240 (2018)
Foucart, S.: Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)
Geppert, J.A., Krahmer, F., Stöger, D.: Refined performance guarantees for sparse power factorization. In: 2017 International Conference on Sampling Theory and Applications (SampTA), pp. 509–513. IEEE (2017)
Haykin, S.: Blind Deconvolution. Prentice Hall, New Jersey (1994)
Iwen, M., Viswanathan, A., Wang, Y.: Robust sparse phase retrieval made easy. Appl. Comput. Harmon. Anal. 42(1), 135–142 (2017)
Jain, P., Netrapalli, P., Sanghavi, S.: Low-rank matrix completion using alternating minimization. In: Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC ’13, pp. 665–674. ACM, New York (2013)
Journée, M., Nesterov, Y., Richtárik, P., Sepulchre, R.: Generalized power method for sparse principal component analysis. J. Mach. Learn. Res. 11, 517–553 (2010)
Jung, P., Krahmer, F., Stöger, D.: Blind demixing and deconvolution at near-optimal rate. IEEE Trans. Inform. Theory 64(2), 704–727 (2018)
Kech, M., Krahmer, F.: Optimal injectivity conditions for bilinear inverse problems with applications to identifiability of deconvolution problems. SIAM J. Appl. Alg. Geom. 1(1), 20–37 (2017). https://doi.org/10.1137/16M1067469
Krauthgamer, R., Nadler, B., Vilenchik, D.: Do semidefinite relaxations solve sparse PCA up to the information limit. Ann. Statist. 43(3), 1300–1322 (2015)
Lee, K., Junge, M.: RIP-like properties in subsampled blind deconvolution. arXiv:1511.06146 (2015)
Lee, K., Krahmer, F., Romberg, J.: Spectral methods for passive imaging: non-asymptotic performance and robustness. arXiv:1708.04343 (2017)
Lee, K., Li, Y., Junge, M., Bresler, Y.: Blind recovery of sparse signals from subsampled convolution. IEEE Trans. Inform. Theory 63(2), 802–821 (2017)
Lee, K., Wu, Y., Bresler, Y.: Near optimal compressed sensing of a class of sparse low-rank matrices via sparse power factorization. IEEE Trans. Inform. Theory (2017)
Li, X., Ling, S., Strohmer, T., Wei, K.: Rapid, robust, and reliable blind deconvolution via nonconvex optimization. arXiv:1606.04933 (2016)
Ling, S., Strohmer, T.: Self-calibration and biconvex compressive sensing. Inverse Probl. 31(11), 115002 (2015)
Ling, S., Strohmer, T.: Blind deconvolution meets blind demixing: algorithms and performance bounds. IEEE Trans. Inform. Theory 63(7), 4497–4520 (2017)
Ling, S., Strohmer, T.: Regularized gradient descent: a nonconvex recipe for fast joint blind deconvolution and demixing. arXiv:1703.08642 (2017)
Ma, Z.: Sparse principal component analysis and iterative thresholding. Ann. Statist. 41(2), 772–801 (2013)
Mendelson, S., Rauhut, H., Ward, R., et al.: Improved bounds for sparse recovery from subsampled random convolutions. Ann. Appl. Probab. 28(6), 3491–3527 (2018)
Needell, D., Tropp, J.A.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)
Oymak, S., Jalali, A., Fazel, M., Eldar, Y.C., Hassibi, B.: Simultaneously structured models with application to sparse and low-rank matrices. IEEE Trans. Inform. Theory 61(5), 2886–2908 (2015)
Qu, Q., Zhang, Y., Eldar, Y.C., Wright, J.: Convolutional phase retrieval via gradient descent. arXiv:1712.00716 (2017)
Soltanolkotabi, M.: Structured signal recovery from quadratic measurements: breaking sample complexity barriers via nonconvex optimization. arXiv:1702.06175 (2017)
Stöger, D., Geppert, J.A., Krahmer, F.: Sparse power factorization with refined peakiness conditions. In: IEEE Statistical Signal Processing Workshop 2018. IEEE (2018)
Tillmann, A.M., Pfetsch, M.E.: The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inform. Theory 60(2), 1248–1259 (2014)
Wang, T., Berthet, Q., Samworth, R.J.: Statistical and computational trade-offs in estimation of sparse principal components. Ann. Statist. 44(5), 1896–1930 (2016)
Xu, G., Liu, H., Tong, L., Kailath, T.: A least-squares approach to blind channel identification. IEEE Trans. Signal Process. 43(12), 2982–2993 (1995)
Acknowledgements
The authors would like to thank Yoram Bresler and Kiryung Lee for helpful discussions. Furthermore, we would like to thank the referees for their careful reading and their helpful suggestions, which improved the manuscript.
Additional information
Communicated by: Holger Rauhut
Jakob Geppert is supported by the German Science Foundation (DFG) in the Collaborative Research Centre “SFB 755: Nanoscale Photonic Imaging” and partially in the framework of the Research Training Group “GRK 2088: Discovering Structure in Complex Data: Statistics meets Optimization and Inverse Problems.” Felix Krahmer and Dominik Stöger have been supported by the German Science Foundation (DFG) in the context of the joint project “SPP 1798: Bilinear Compressed Sensing” (KR 4512/2-1). The results of this paper have been presented in part at the 12th International Conference on Sampling Theory and Applications, July 3–7, 2017, Tallinn, Estonia [12], and at the IEEE Statistical Signal Processing Workshop 2018, June 10–13, Freiburg, Germany [34].
Appendix: Proof of Lemma 5
For the proof of Lemma 5, we will use the following result.
Lemma 1 (Lemma 17 and Lemma 18 in [23])
Assume that the \((3s_1, 3s_2, 2)\)-restricted isometry property is fulfilled for some restricted isometry constant \(\delta > 0\). Assume that the cardinality of \(\widetilde J_{1} \subseteq \left[ n_{1} \right]\), respectively \(\widetilde J_{2} \subseteq \left[ n_{2} \right]\), is at most \(2s_1\), respectively \(2s_2\). Then, whenever \(u \in \mathbb{C}^{n_{1}}\) is at most \(2s_1\)-sparse and \(v \in \mathbb{C}^{n_{2}}\) is at most \(2s_2\)-sparse, we have that
Furthermore, for all \(z \in \mathbb{C}^{n}\) and for all \(\widetilde J_{1} \subseteq \left[ n_{1} \right]\), respectively \(\widetilde J_{2} \subseteq \left[ n_{2} \right]\), with cardinality at most \(s_1\), respectively \(s_2\), we have that
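For the reader's orientation, we recall the restricted isometry notion for sparse low-rank matrices underlying the lemma; this is a sketch in the spirit of the definition of Lee, Wu, and Bresler [27], not a verbatim quotation of the formulation used in the proofs. A measurement map \(\mathcal{A}\colon \mathbb{C}^{n_1 \times n_2} \to \mathbb{C}^{m}\) is said to satisfy the \((s_1, s_2, r)\)-restricted isometry property with constant \(\delta > 0\) if
\[
(1-\delta)\, \|X\|_F^2 \;\le\; \|\mathcal{A}(X)\|^2 \;\le\; (1+\delta)\, \|X\|_F^2
\]
for all \(X \in \mathbb{C}^{n_1 \times n_2}\) of rank at most \(r\) with at most \(s_1\) non-zero rows and at most \(s_2\) non-zero columns. The lemma above applies this property with parameters \((3s_1, 3s_2, 2)\).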
Proof of Lemma 5
Recall that \(b = \mathcal{A}\left(X\right) + z\) and define \(k_1\) and \(k_2\) by
The starting point of our proof is the observation that
where the first inequality is due to the definition of k2 and the second one follows from \( \widetilde {J_{1}} \subset \widehat J_{1} \), which is due to Lemma 4. The right-hand side of the inequality chain can be estimated from below by
In the first inequality, we used \(b= \mathcal {A} \left (uv^{*}\right ) + z\) and the triangle inequality. The second inequality follows from Lemma 7. The last line follows from ∥uv∗∥F = 1 and ∥z∥ = ν. Next, we will estimate the left-hand side of (24) by
The first two lines are obtained by reasoning analogous to that for (25). The last line is due to \( \left \{ k_{2} \right \} \subset \widehat {J_{2}} \), which is a consequence of the definition of \( \widehat J_{2} \) (3) and the definition of \( \left \{ k_{2} \right \} \) (23). We finish the proof by combining the inequality chains (24), (25), and (26). □
Cite this article
Geppert, J., Krahmer, F. & Stöger, D. Sparse power factorization: balancing peakiness and sample complexity. Adv Comput Math 45, 1711–1728 (2019). https://doi.org/10.1007/s10444-019-09698-6