
Just Least Squares: Binary Compressive Sampling with Low Generative Intrinsic Dimension


A Correction to this article was published on 06 July 2023


Abstract

In this paper, we consider recovering \(n\)-dimensional signals from \(m\) binary measurements corrupted by noise and sign flips, under the assumption that the target signals have low generative intrinsic dimension, i.e., the target signals can be approximately generated via an \(L\)-Lipschitz generator \(G: \mathbb {R}^k\rightarrow \mathbb {R}^{n}\), \(k\ll n\). Although the binary measurement model is highly nonlinear, we propose a least squares decoder and prove that, with high probability, it achieves a sharp estimation error \(C\sqrt{\frac{k\log (Ln)}{m}}\), up to a constant \(c\), as long as \(m\ge C k\log (Ln)\). Extensive numerical simulations and comparisons with state-of-the-art methods demonstrate that the least squares decoder is robust to noise and sign flips, as predicted by our theory. By constructing a ReLU network with properly chosen depth and width, we also verify the (approximate) deep generative prior, which is of independent interest.
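To make the setup concrete, below is a minimal sketch in Python/NumPy of the binary measurement model and the least squares decoder described above. The two-layer random ReLU generator, the dimensions (n, k, m), the noise level, the sign-flip rate, and the plain gradient descent settings are all illustrative assumptions for this sketch, not the paper's experimental configuration, which uses pre-trained generative models (see Note 1).

import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 10, 400                     # ambient dim, latent dim, measurements

# Toy generator G: R^k -> R^n; random weights stand in for a trained model.
W1 = rng.normal(size=(64, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, 64)) / np.sqrt(64)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)    # two-layer ReLU network

# Target signal in the range of G, placed on the unit sphere.
x_star = G(rng.normal(size=k))
x_star /= np.linalg.norm(x_star)

# Binary measurements y_i = xi_i * sign(<a_i, x*> + noise_i), xi_i a random sign flip.
A = rng.normal(size=(m, n))
noise = 0.1 * rng.normal(size=m)
flips = np.where(rng.random(m) < 0.05, -1.0, 1.0)     # flip 5% of the signs
y = flips * np.sign(A @ x_star + noise)

# Least squares decoder: minimize (1/m) * ||y - A G(z)||^2 over the latent code z.
def grad(z):
    r = y - A @ G(z)                                   # residual
    back = W1.T @ ((W1 @ z > 0) * (W2.T @ (A.T @ r)))  # backprop through the ReLU layer
    return -2.0 * back / m

z = rng.normal(size=k)
for _ in range(2000):                                  # plain gradient descent
    z -= 1e-3 * grad(z)

# The theory recovers x* up to a constant c, so compare normalized directions.
x_hat = G(z) / np.linalg.norm(G(z))
print("estimation error:", np.linalg.norm(x_hat - x_star))

Because the latent objective is nonconvex, plain gradient descent from a single initialization may stall; in practice one would use several random restarts or an adaptive optimizer, and the resulting error should scale like \(\sqrt{k\log (Ln)/m}\) as the theorem predicts.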


Data Availability

The datasets analysed during the current study are available in [30].


Notes

  1. We use the pre-trained generative model of Bora et al. [4] available at https://github.com/AshishBora/csgm.

References

  1. Ahsen, M.E., Vidyasagar, M.: An approach to one-bit compressed sensing based on probably approximately correct learning theory. J. Mach. Learn. Res. 20(1), 408–430 (2019)

  2. Bartlett, P.L., Harvey, N., Liaw, C., Mehrabian, A.: Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. J. Mach. Learn. Res. 20, 2285–2301 (2019)

  3. Bartlett, P.L., Maiorov, V., Meir, R.: Almost linear VC dimension bounds for piecewise polynomial networks. Adv. Neural Inf. Process. Syst. 11, 190–196 (1998)

  4. Bora, A., Jalal, A., Price, E., Dimakis, A.G.: Compressed sensing using generative models. In: International Conference on Machine Learning, pp. 537–546. PMLR (2017)

  5. Boucheron, S., Lugosi, G., Massart, P.: Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, Oxford (2013)

  6. Boufounos, P.T.: Greedy sparse signal reconstruction from sign measurements. In: 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, pp. 1305–1309. IEEE (2009)

  7. Boufounos, P.T., Baraniuk, R.G.: 1-bit compressive sensing. In: 2008 42nd Annual Conference on Information Sciences and Systems (CISS), pp. 16–21. IEEE (2008)

  8. Brillinger, D.R.: A generalized linear model with Gaussian regressor variables. In: A Festschrift for Erich L. Lehmann, p. 97 (1982)

  9. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52(2), 489–509 (2006)

  10. Dai, D.-Q., Shen, L., Xu, Y., Zhang, N.: Noisy 1-bit compressive sensing: models and algorithms. Appl. Comput. Harmon. Anal. 40(1), 1–32 (2016)

  11. Ding, Z., Huang, J., Jiao, Y., Lu, X., Yang, Z.: Robust decoding from binary measurements with cardinality constraint least squares. arXiv preprint arXiv:2006.02890 (2020)

  12. Donoho, D.L.: Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)

  13. Falconer, K.: Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons, New York (2004)

  14. Fazel, M., Candès, E., Recht, B., Parrilo, P.: Compressed sensing and robust recovery of low rank matrices. In: 2008 42nd Asilomar Conference on Signals, Systems and Computers, pp. 1043–1047. IEEE (2008)

  15. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, Basel (2013)

  16. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems 27, pp. 2672–2680. Curran Associates, Inc. (2014)

  17. Gopi, S., Netrapalli, P., Jain, P., Nori, A.: One-bit compressed sensing: provable support and vector recovery. In: International Conference on Machine Learning, pp. 154–162 (2013)

  18. Gupta, A., Nowak, R., Recht, B.: Sample complexity for 1-bit compressed sensing and sparse classification. In: 2010 IEEE International Symposium on Information Theory (ISIT), pp. 1553–1557. IEEE (2010)

  19. Hand, P., Leong, O., Voroninski, V.: Phase retrieval under a generative prior. Adv. Neural Inf. Process. Syst. 31, 9136–9146 (2018)

  20. Haupt, J., Baraniuk, R.: Robust support recovery using sparse compressive sensing matrices. In: 2011 45th Annual Conference on Information Sciences and Systems (CISS), pp. 1–6. IEEE (2011)

  21. Huang, J., Jiao, Y., Li, Z., Liu, S., Wang, Y., Yang, Y.: An error analysis of generative adversarial networks for learning distributions. J. Mach. Learn. Res. 23, 1–43 (2022)

  22. Huang, J., Jiao, Y., Lu, X., Zhu, L.: Robust decoding from 1-bit compressive sampling with ordinary and regularized least squares. SIAM J. Sci. Comput. 40(4), A2062–A2086 (2018)

  23. Huang, W., Hand, P., Heckel, R., Voroninski, V.: A provably convergent scheme for compressive sensing under random generative priors. J. Fourier Anal. Appl. 27, 1–34 (2021)

  24. Huang, X., Shi, L., Yan, M., Suykens, J.A.K.: Pinball loss minimization for one-bit compressive sensing. Neurocomputing 314, 275–283 (2018)

  25. Jacques, L., Degraux, K., De Vleeschouwer, C.: Quantized iterative hard thresholding: bridging 1-bit and high-resolution quantized compressed sensing. arXiv preprint arXiv:1305.1786 (2013)

  26. Jacques, L., Laska, J.N., Boufounos, P.T., Baraniuk, R.G.: Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Trans. Inf. Theory 59(4), 2082–2102 (2013)

  27. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. In: ICLR (2014)

  28. Laska, J.N., Baraniuk, R.G.: Regime change: bit-depth versus measurement-rate in compressive sensing. IEEE Trans. Signal Process. 60(7), 3496–3505 (2012)

  29. Laska, J.N., Wen, Z., Yin, W., Baraniuk, R.G.: Trust, but verify: fast and accurate signal recovery from 1-bit compressive measurements. IEEE Trans. Signal Process. 59(11), 5289–5301 (2011)

  30. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  31. Li, K.C., Duan, N.: Regression analysis under link violation. Ann. Stat. 17(3), 1009–1052 (1989)

  32. Liu, W., Gong, D., Xu, Z.: One-bit compressed sensing by greedy algorithms. Numer. Math. Theory Methods Appl. 9(2), 169–184 (2016)

  33. Liu, Z., Gomes, S., Tiwari, A., Scarlett, J.: Sample complexity bounds for 1-bit compressive sensing and binary stable embeddings with generative priors. In: International Conference on Machine Learning, pp. 6216–6225. PMLR (2020)

  34. Liu, Z., Scarlett, J.: The generalized lasso with nonlinear observations and generative priors. Adv. Neural Inf. Process. Syst. 33, 19125–19136 (2020)

  35. Liu, Z., Scarlett, J.: Information-theoretic lower bounds for compressive sensing with generative models. IEEE J. Sel. Areas Inf. Theory 1(1), 292–303 (2020)

  36. Mallat, S.: A Wavelet Tour of Signal Processing: The Sparse Way. Academic Press, Cambridge (2008)

  37. Neykov, M., Liu, J.S., Cai, T.: L1-regularized least squares for support recovery of high dimensional single index models with Gaussian designs. J. Mach. Learn. Res. 17(1), 2976–3012 (2016)

  38. Plan, Y., Vershynin, R.: One-bit compressed sensing by linear programming. Commun. Pure Appl. Math. 66(8), 1275–1297 (2013)

  39. Plan, Y., Vershynin, R.: Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Trans. Inf. Theory 59(1), 482–494 (2013)

  40. Plan, Y., Vershynin, R.: The generalized lasso with non-linear observations. IEEE Trans. Inf. Theory 62(3), 1528–1537 (2016)

  41. Plan, Y., Vershynin, R., Yudovina, E.: High-dimensional estimation with geometric constraints. Inf. Inference J. IMA 6(1), 1–40 (2017)

  42. Qiu, S., Wei, X., Yang, Z.: Robust one-bit recovery via ReLU generative networks: near-optimal statistical rate and global landscape analysis. In: International Conference on Machine Learning, pp. 7857–7866. PMLR (2020)

  43. Jimenez Rezende, D., Mohamed, S.: Variational inference with normalizing flows. In: ICML (2015)

  44. Sayood, K.: Introduction to Data Compression. Morgan Kaufmann, Burlington (2017)

  45. Shen, Z., Yang, H., Zhang, S.: Nonlinear approximation via compositions. Neural Netw. 119, 74–84 (2019)

  46. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454 (2018)

  47. Vardi, G., Yehudai, G., Shamir, O.: On the optimal memorization power of ReLU neural networks. In: International Conference on Learning Representations (2022). https://openreview.net/forum?id=MkTPtnjeYTV

  48. Vershynin, R.: Estimation in high dimensions: a geometric perspective. In: Pfander, G.E. (ed.) Sampling Theory, a Renaissance: Compressive Sensing and Other Developments, pp. 3–66. Springer International Publishing, Cham (2015)

  49. Vershynin, R.: High-Dimensional Probability: An Introduction with Applications in Data Science, vol. 47. Cambridge University Press, Cambridge (2018)

  50. Vershynin, R.: Memory capacity of neural networks with threshold and rectified linear unit activations. SIAM J. Math. Data Sci. 2(4), 1004–1033 (2020)

  51. Wei, X., Yang, Z., Wang, Z.: On the statistical rate of nonlinear recovery in generative models with heavy-tailed data. In: International Conference on Machine Learning, pp. 6697–6706. PMLR (2019)

  52. Wu, Y., Rosca, M., Lillicrap, T.: Deep compressed sensing. In: International Conference on Machine Learning, pp. 6850–6860. PMLR (2019)

  53. Yan, M., Yang, Y., Osher, S.: Robust 1-bit compressive sensing using adaptive outlier pursuit. IEEE Trans. Signal Process. 60(7), 3868–3875 (2012)

  54. Zhang, L., Yi, J., Jin, R.: Efficient algorithms for robust one-bit compressive sensing. In: International Conference on Machine Learning, pp. 820–828 (2014)

  55. Zymnis, A., Boyd, S., Candès, E.: Compressed sensing with quantized measurements. IEEE Signal Process. Lett. 17(2), 149–152 (2010)


Acknowledgements

We would like to thank the anonymous referees and the associate editor for their useful comments and suggestions, which have led to considerable improvements in the paper.

Funding

This work is supported by the National Key Research and Development Program of China (No. 2020YFA0714200), the National Natural Science Foundation of China (Nos. 11871474, 11871385), and the research fund of KLATASDSMOE of China.

Author information


Contributions

All authors contributed to the study conception and design. The theoretical analysis was performed by Yuling Jiao, Min Liu and Xiliang Lu, and the numerical tests were performed by Dingwei Li, Min Liu and Yuanyuan Yang. The first draft of the manuscript was written by Min Liu, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xiliang Lu.

Ethics declarations

Competing interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jiao, Y., Li, D., Liu, M. et al. Just Least Squares: Binary Compressive Sampling with Low Generative Intrinsic Dimension. J Sci Comput 95, 28 (2023). https://doi.org/10.1007/s10915-023-02158-w

