
Sparse power factorization: balancing peakiness and sample complexity

Published in: Advances in Computational Mathematics

Abstract

In many applications, one is faced with an inverse problem, where the known signal depends in a bilinear way on two unknown input vectors. Often at least one of the input vectors is assumed to be sparse, i.e., to have only few non-zero entries. Sparse power factorization (SPF), proposed by Lee, Wu, and Bresler, aims to tackle this problem. They have established recovery guarantees for a somewhat restrictive class of signals under the assumption that the measurements are random. We generalize these recovery guarantees to a significantly enlarged and more realistic signal class at the expense of a moderately increased number of measurements.
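To make the measurement model concrete, the following is a minimal numerical sketch, not taken from the paper: all dimensions, sparsity levels, and the Gaussian measurement operator are illustrative assumptions. It sets up a signal that depends bilinearly on two sparse vectors u and v via measurements of the rank-one matrix uv*.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown sparse factors u, v (dimensions and supports chosen arbitrarily).
n1, n2, m = 12, 10, 80
u = np.zeros(n1); u[[2, 5]] = [1.0, -0.7]           # s1 = 2 nonzero entries
v = np.zeros(n2); v[[1, 4, 8]] = [0.8, 0.5, -0.3]   # s2 = 3 nonzero entries

# A generic linear operator acting on the rank-one matrix uv*:
# b_i = <A_i, u v*>. Each b_i is linear in u for fixed v and vice versa,
# i.e., the observed signal is bilinear in (u, v).
A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)
b = np.einsum('ijk,jk->i', A, np.outer(u, v))

# Bilinearity check: scaling one factor scales the measurements linearly.
b2 = np.einsum('ijk,jk->i', A, np.outer(2 * u, v))
assert np.allclose(b2, 2 * b)
```

Recovering u and v from b alone, exploiting their sparsity, is the inverse problem that sparse power factorization addresses.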


References

  1. Ahmed, A., Recht, B., Romberg, J.: Blind deconvolution using convex programming. IEEE Trans. Inform. Theory 60(3), 1711–1732 (2014)


  2. Amini, A.A., Wainwright, M.J.: High-dimensional analysis of semidefinite relaxations for sparse principal components. Ann. Stat. 37(5B), 2877–2921 (2009)


  3. Bahmani, S., Romberg, J.: Near-optimal estimation of simultaneously sparse and low-rank matrices from nested linear measurements. Inf. Inference 5(3), 331–351 (2016)


  4. Bahmani, S., Romberg, J.: Solving equations of random convex functions via anchored regression. arXiv:1702.05327 (2017)

  5. Berthet, Q., Rigollet, P.: Optimal detection of sparse principal components in high dimension. Ann. Stat. 41(4), 1780–1815 (2013)


  6. Candes, E.J., Li, X., Soltanolkotabi, M.: Phase retrieval via Wirtinger flow: theory and algorithms. IEEE Trans. Inform. Theory 61(4), 1985–2007 (2015)


  7. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006)


  8. d’Aspremont, A., Bach, F., Ghaoui, L.E.: Optimal solutions for sparse principal component analysis. J. Mach. Learn. Res. 9, 1269–1294 (2008)


  9. Deshpande, Y., Montanari, A.: Sparse PCA via covariance thresholding. In: Advances in Neural Information Processing Systems, pp. 334–342 (2014)

  10. Fornasier, M., Maly, J., Naumova, V.: A-T-LAS\(_{2,1}\): a multi-penalty approach to compressed sensing of low-rank matrices with sparse decompositions. arXiv:1801.06240 (2018)

  11. Foucart, S.: Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)


  12. Geppert, J.A., Krahmer, F., Stöger, D.: Refined performance guarantees for sparse power factorization. In: 2017 International Conference on Sampling Theory and Applications (SampTA), pp. 509–513. IEEE (2017)

  13. Haykin, S.: Blind Deconvolution. Prentice Hall, New Jersey (1994)


  14. Iwen, M., Viswanathan, A., Wang, Y.: Robust sparse phase retrieval made easy. Appl. Comput. Harmon. Anal. 42(1), 135–142 (2017)


  15. Jain, P., Netrapalli, P., Sanghavi, S.: Low-rank matrix completion using alternating minimization. In: Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC ’13, pp. 665–674. ACM, New York (2013)

  16. Journée, M., Nesterov, Y., Richtárik, P., Sepulchre, R.: Generalized power method for sparse principal component analysis. J. Mach. Learn. Res. 11, 517–553 (2010)


  17. Jung, P., Krahmer, F., Stöger, D.: Blind demixing and deconvolution at near-optimal rate. IEEE Trans. Inform. Theory 64(2), 704–727 (2018)


  18. Kech, M., Krahmer, F.: Optimal injectivity conditions for bilinear inverse problems with applications to identifiability of deconvolution problems. SIAM J. Appl. Alg. Geom. 1(1), 20–37 (2017). https://doi.org/10.1137/16M1067469


  19. Krauthgamer, R., Nadler, B., Vilenchik, D.: Do semidefinite relaxations solve sparse PCA up to the information limit? Ann. Stat. 43(3), 1300–1322 (2015)


  20. Lee, K., Junge, M.: RIP-like properties in subsampled blind deconvolution. arXiv:1511.06146 (2015)

  21. Lee, K., Krahmer, F., Romberg, J.: Spectral methods for passive imaging: non-asymptotic performance and robustness. arXiv:1708.04343 (2017)

  22. Lee, K., Li, Y., Junge, M., Bresler, Y.: Blind recovery of sparse signals from subsampled convolution. IEEE Trans. Inform. Theory 63(2), 802–821 (2017)


  23. Lee, K., Wu, Y., Bresler, Y.: Near-optimal compressed sensing of a class of sparse low-rank matrices via sparse power factorization. IEEE Trans. Inform. Theory (2017)

  24. Li, X., Ling, S., Strohmer, T., Wei, K.: Rapid, robust, and reliable blind deconvolution via nonconvex optimization. arXiv:1606.04933 (2016)

  25. Ling, S., Strohmer, T.: Self-calibration and biconvex compressive sensing. Inverse Probl. 31(11), 115002 (2015)


  26. Ling, S., Strohmer, T.: Blind deconvolution meets blind demixing: algorithms and performance bounds. IEEE Trans. Inform. Theory 63(7), 4497–4520 (2017)


  27. Ling, S., Strohmer, T.: Regularized gradient descent: a nonconvex recipe for fast joint blind deconvolution and demixing. arXiv:1703.08642 (2017)

  28. Ma, Z.: Sparse principal component analysis and iterative thresholding. Ann. Stat. 41(2), 772–801 (2013)


  29. Mendelson, S., Rauhut, H., Ward, R.: Improved bounds for sparse recovery from subsampled random convolutions. Ann. Appl. Probab. 28(6), 3491–3527 (2018)


  30. Needell, D., Tropp, J.A.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)


  31. Oymak, S., Jalali, A., Fazel, M., Eldar, Y.C., Hassibi, B.: Simultaneously structured models with application to sparse and low-rank matrices. IEEE Trans. Inform. Theory 61(5), 2886–2908 (2015)


  32. Qu, Q., Zhang, Y., Eldar, Y.C., Wright, J.: Convolutional phase retrieval via gradient descent. arXiv:1712.00716 (2017)

  33. Soltanolkotabi, M.: Structured signal recovery from quadratic measurements: breaking sample complexity barriers via nonconvex optimization. arXiv:1702.06175 (2017)

  34. Stöger, D., Geppert, J.A., Krahmer, F.: Sparse power factorization with refined peakiness conditions. In: IEEE Statistical Signal Processing Workshop 2018. IEEE (2018)

  35. Tillmann, A.M., Pfetsch, M.E.: The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inform. Theory 60(2), 1248–1259 (2014)


  36. Wang, T., Berthet, Q., Samworth, R.J.: Statistical and computational trade-offs in estimation of sparse principal components. Ann. Stat. 44(5), 1896–1930 (2016)


  37. Xu, G., Liu, H., Tong, L., Kailath, T.: A least-squares approach to blind channel identification. IEEE Trans. Signal Process. 43(12), 2982–2993 (1995)



Acknowledgements

The authors want to thank Yoram Bresler and Kiryung Lee for helpful discussions. Furthermore, we would like to thank the referees for their careful reading and their helpful suggestions, which improved the manuscript.


Additional information

Communicated by: Holger Rauhut

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Jakob Geppert is supported by the German Science Foundation (DFG) in the Collaborative Research Centre “SFB 755: Nanoscale Photonic Imaging” and partially in the framework of the Research Training Group “GRK 2088: Discovering Structure in Complex Data: Statistics meets Optimization and Inverse Problems.” Felix Krahmer and Dominik Stöger have been supported by the German Science Foundation (DFG) in the context of the joint project “SPP 1798: Bilinear Compressed Sensing” (KR 4512/2-1). The results of this paper have been presented in part at the 12th International Conference on Sampling Theory and Applications, July 3–7, 2017, Tallinn, Estonia [12], and in the IEEE Statistical Signal Processing Workshop 2018, June 10–13, Freiburg, Germany [34].

Appendix: Proof of Lemma 5

For the proof of Lemma 5, we will use the following result.

Lemma 7 (Lemma 17 and Lemma 18 in [23])

Assume that the (3s1, 3s2, 2)-restricted isometry property is fulfilled for some restricted isometry constant δ > 0. Assume that the cardinality of \(\widetilde J_{1} \subseteq \left [ n_{1} \right ]\), respectively \(\widetilde J_{2} \subseteq \left [ n_{2}\right ]\), is at most 2s1, respectively 2s2. Then, whenever \(u\in \mathbb {C}^{n_{1}}\) is at most 2s1-sparse and \(v \in \mathbb {C}^{n_{2}}\) is at most 2s2-sparse, we have that

$$ \|{\Pi}_{\widetilde J_{1}}[(\mathcal{A}^{\ast}\mathcal{A}- I)(uv^{*})]{\Pi}_{\widetilde J_{2}}\| \leq \delta \|uv^{*}\|_{\mathrm{F}}. $$

Furthermore, for all \( z \in \mathbb {C}^{n} \) and for all \(\widetilde J_{1} \subseteq \left [ n_{1} \right ]\), respectively \(\widetilde J_{2} \subseteq \left [ n_{2}\right ]\), with cardinality at most s1, respectively s2, we have that

$$ \|{\Pi}_{\widetilde J_{1}}[\mathcal{A}^{\ast}(z)]{\Pi}_{\widetilde J_{2}}\| \leq \sqrt{1+\delta}\|z\|_{\ell_{2}}. $$
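As an informal illustration of the first bound, one can estimate such a restricted-isometry-type constant empirically for a Gaussian measurement operator. The sketch below is a toy sanity check, not the paper's setup: the dimensions, sparsity levels, and the operator are all assumptions. It measures the projected deviation \(\|{\Pi}_{J_{1}}[(\mathcal{A}^{\ast}\mathcal{A}-I)(uv^{*})]{\Pi}_{J_{2}}\|/\|uv^{*}\|_{\mathrm{F}}\) over random sparse rank-one test matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and sparsity levels (illustrative assumptions).
n1, n2, m = 20, 15, 400
s1, s2 = 3, 3

# Gaussian measurement operator A: X -> (<A_i, X>)_i, normalized so that
# E ||A(X)||^2 = ||X||_F^2.
A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)

def A_op(X):
    return np.einsum('ijk,jk->i', A, X)

def A_adj(z):
    return np.einsum('ijk,i->jk', A, z)

def proj(J, n):
    """Coordinate projection onto the index set J."""
    P = np.zeros((n, n))
    P[J, J] = 1.0
    return P

# Empirically probe the first bound of the lemma on random 2s-sparse
# rank-one test matrices: ||P_J1 [(A*A - I)(uv*)] P_J2|| <= delta ||uv*||_F.
worst = 0.0
for _ in range(50):
    u = np.zeros(n1); v = np.zeros(n2)
    J1 = rng.choice(n1, size=2 * s1, replace=False)
    J2 = rng.choice(n2, size=2 * s2, replace=False)
    u[J1] = rng.standard_normal(2 * s1)
    v[J2] = rng.standard_normal(2 * s2)
    X = np.outer(u, v)
    E = A_adj(A_op(X)) - X                      # (A*A - I)(uv*)
    M = proj(J1, n1) @ E @ proj(J2, n2)
    worst = max(worst, np.linalg.norm(M, 2) / np.linalg.norm(X, 'fro'))

print(f"empirical delta over trials: {worst:.3f}")
```

For m large relative to the sparsity levels, the printed ratio is typically well below 1, consistent with a small restricted isometry constant on this restricted class.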

Proof of Lemma 5

Recall that \(b = \mathcal {A}\left (X\right ) + z \) and define k1 and k2 by

$$ \begin{array}{llll} k_{1} &:= \underset{ k \in \left[ n_{2} \right] }{\text{arg max}} \ \vert v_{k} \vert\\ k_{2} & := \underset{ k \in \left[ n_{2} \right] }{\text{arg max}} \big\|{\Pi}_{\widehat J_{1}}[\mathcal{A}^{\ast}(b)]{\Pi}_{\left\{ k \right\} }\big\|_{\mathrm{F}}. \end{array} $$
(23)
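The index selections in (23) can be mimicked numerically. In the hypothetical instance below, the operator, dimensions, signal, and the support estimate standing in for \( \widehat J_{1} \) are all illustrative assumptions: k1 picks the peak entry of v, while k2 picks the column of \( \mathcal {A}^{\ast }(b) \) with the largest projected norm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem data (stand-ins, not objects from the paper's setup).
n1, n2, m = 20, 15, 300
A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)

u = np.zeros(n1); u[[1, 4, 7]] = [1.0, -0.5, 0.3]
v = np.zeros(n2); v[[2, 9]] = [2.0, -0.4]
u /= np.linalg.norm(u); v /= np.linalg.norm(v)    # so ||uv*||_F = 1

b = np.einsum('ijk,jk->i', A, np.outer(u, v))     # noiseless for simplicity
M = np.einsum('ijk,i->jk', A, b)                  # A*(b)

J1_hat = np.argsort(-np.abs(u))[:3]               # stand-in support estimate

# k1 maximizes |v_k|; k2 maximizes the Frobenius norm of the k-th column of
# Pi_{J1_hat} [A*(b)], exactly as in (23).
k1 = int(np.argmax(np.abs(v)))
k2 = int(np.argmax(np.linalg.norm(M[J1_hat, :], axis=0)))
print(k1, k2)
```

With a strongly peaked v and enough measurements, the data-driven choice k2 coincides with the oracle choice k1, which is the intuition the proof quantifies.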

The starting point of our proof is the observation that

$$ \big\|{\Pi}_{\widehat J_{1}}[\mathcal{A}^{\ast}(b)]{\Pi}_{\left\{ k_{2} \right\} }\big\|_{\mathrm{F}} \ge \big\|{\Pi}_{\widehat J_{1}}[\mathcal{A}^{\ast}(b)]{\Pi}_{\left\{ k_{1} \right\} }\big\|_{\mathrm{F}} \ge \big\|{\Pi}_{\widetilde{J_{1}}}[\mathcal{A}^{\ast}(b)]{\Pi}_{\left\{ k_{1} \right\} }\big\|_{\mathrm{F}}, $$
(24)

where the first inequality is due to the definition of k2 and the second one follows from \( \widetilde {J_{1}} \subset \widehat J_{1} \), which is due to Lemma 4. The right-hand side of the inequality chain can be estimated from below by

$$ \begin{array}{llll} &\big\|{\Pi}_{\widetilde{J_{1}}}[\mathcal{A}^{\ast}(b)]{\Pi}_{\left\{ k_{1} \right\} }\big\|_{\mathrm{F}}\\ \geq & \big\|{\Pi}_{\widetilde{J_{1}} } uv^{*} {\Pi}_{\left\{ k_{1} \right\} }\big\|_{\mathrm{F}} - \big\|{\Pi}_{\widetilde{J_{1}}} \left[ \left( \mathcal{A}^{*} \mathcal{A} - I \right) \left( uv^{*}\right) \right] {\Pi}_{\left\{ k_{1} \right\} }\big\|_{\mathrm{F}} - \big\|{\Pi}_{\widetilde{J_{1}} } \mathcal{A}^{*} \left( z\right) {\Pi}_{\left\{ k_{1} \right\} }\big\|_{\mathrm{F}} \\ \geq & \big\|{\Pi}_{\widetilde{J_{1}} } uv^{*} {\Pi}_{\left\{ k_{1} \right\} }\big\|_{\mathrm{F}} - \left( \delta \Vert uv^{*} \Vert_{F} + \sqrt{1 + \delta} \Vert z \Vert \right) \\ \ge & \big\|{\Pi}_{\widetilde{J_{1}} } u \big\| \Vert v \Vert_{\infty} - \left( \delta + \nu +\delta \nu \right). \end{array} $$
(25)

In the first inequality, we used \(b= \mathcal {A} \left (uv^{*}\right ) + z\) and the triangle inequality. The second inequality follows from Lemma 7. The last line follows from \(\Vert uv^{*} \Vert_{\mathrm{F}} = 1\) and \(\Vert z \Vert = \nu \). Next, we will estimate the left-hand side of (24) by

$$ \begin{array}{llll} &\big\|{\Pi}_{\widehat J_{1}}[\mathcal{A}^{\ast}(b)]{\Pi}_{\left\{ k_{2} \right\} }\big\|_{\mathrm{F}}\\ \leq & \big\|{\Pi}_{\widehat J_{1} } uv^{*} {\Pi}_{\left\{ k_{2} \right\} }\big\|_{\mathrm{F}} + \left( \delta \Vert uv^{*} \Vert_{F} + \sqrt{1 + \delta} \Vert z \Vert \right) \\ \le & \big\|{\Pi}_{\widehat J_{1} } u \big\| \big\| {\Pi}_{\left\{ k_{2} \right\} } v \big\| + \left( \delta + \nu +\delta \nu \right) \\ \le & \big\|{\Pi}_{\widehat J_{1} } u \big\| \big\| {\Pi}_{\widehat J_{2} } v \big\| + \left( \delta + \nu +\delta \nu \right). \end{array} $$
(26)

The first two lines are obtained by reasoning analogous to that for (25). The last line is due to \( \left \{ k_{2} \right \} \subset \widehat {J_{2}} \), which is a consequence of the definition of \( \widehat J_{2} \) (3) and the definition of {k2} (23). We finish the proof by combining the inequality chains (24), (25), and (26). □
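As a sanity check, the inequality chain (24) can be verified on a random instance: the first inequality holds because k2 maximizes the projected column norm, and the second because the rows indexed by the smaller set form a subset of those indexed by the larger one. The sketch below uses an illustrative Gaussian operator and hand-picked index sets standing in for \( \widehat J_{1} \) and \( \widetilde J_{1} \); none of these are the paper's concrete objects.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, m = 20, 15, 300
A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)

def column_norm_chain(u, v, z, J1_hat, J1_tilde):
    """Return the three quantities appearing in the chain (24)."""
    b = np.einsum('ijk,jk->i', A, np.outer(u, v)) + z
    M = np.einsum('ijk,i->jk', A, b)              # A*(b)
    k1 = int(np.argmax(np.abs(v)))
    cols = np.linalg.norm(M[J1_hat, :], axis=0)
    k2 = int(np.argmax(cols))
    lhs = cols[k2]                                # ||Pi_{J1_hat} A*(b) Pi_{k2}||_F
    mid = cols[k1]                                # ||Pi_{J1_hat} A*(b) Pi_{k1}||_F
    rhs = np.linalg.norm(M[J1_tilde, k1])         # ||Pi_{J1_tilde} A*(b) Pi_{k1}||_F
    return lhs, mid, rhs

u = np.zeros(n1); u[[0, 3, 5]] = [0.9, 0.3, 0.3]; u /= np.linalg.norm(u)
v = np.zeros(n2); v[[2, 6]] = [1.5, 0.2]; v /= np.linalg.norm(v)
z = 0.01 * rng.standard_normal(m)                 # small noise, ||z|| = nu

# J1_tilde must be contained in J1_hat, mirroring the proof's assumption.
lhs, mid, rhs = column_norm_chain(u, v, z, J1_hat=[0, 3, 5, 8], J1_tilde=[0, 3, 5])
assert lhs >= mid and mid >= rhs                  # the chain (24) for this instance
```

Both inequalities hold deterministically here by construction, which mirrors why (24) needs no probabilistic argument; the probabilistic content of the proof sits entirely in the restricted-isometry bounds used for (25) and (26).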


Cite this article

Geppert, J., Krahmer, F. & Stöger, D. Sparse power factorization: balancing peakiness and sample complexity. Adv Comput Math 45, 1711–1728 (2019). https://doi.org/10.1007/s10444-019-09698-6
