Deep Networks for Tensor Approximation

  • Chapter in: Tensor Computation for Data Analysis

Abstract

Benefiting from its powerful fitting ability, deep learning has outperformed other machine learning methods in fields such as computer vision and natural language processing. Driven by the emergence of large amounts of multidimensional data, many efforts have been made to apply deep learning to tensor-based machine learning tasks in data processing, such as compressive sensing and tensor completion.

In this chapter, we present the motivations, fundamentals, popular algorithms, and applications of deep learning-based tensor approximation methods. These methods fall into three categories: classical deep learning, deep unrolling, and plug-and-play (PnP). Classical deep neural networks stack nonlinear layers and map the input to the output directly. Deep unrolling maps a model-based iterative algorithm onto a deep neural network with a fixed number of stages, which makes the network interpretable. Deep PnP treats a specific subproblem that arises in many model-based methods as a denoising problem and solves it with a pre-trained deep network. Finally, three applications demonstrate the effectiveness of deep networks on multiway data: convolutional neural network tensor rank approximation, deep unrolling for snapshot compressive imaging, and deep PnP for tensor completion.
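To make the deep unrolling idea above concrete, the following is a minimal sketch, in PyTorch, of an unrolled ISTA network for sparse recovery: each network layer corresponds to one algorithm iteration with its own learnable step size and threshold. The class name UnrolledISTA, the layer count, and the toy problem sizes are illustrative assumptions, not the specific architectures developed in the chapter.

```python
# Minimal sketch (illustrative, not the chapter's exact models): an unrolled
# ISTA network for the sparse recovery problem
#   min_x 0.5 * ||y - A x||_2^2 + lambda * ||x||_1,
# where each "layer" is one ISTA iteration with learnable parameters.
import torch
import torch.nn as nn


def soft_threshold(x, theta):
    # Proximal operator of the l1 norm (soft thresholding).
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)


class UnrolledISTA(nn.Module):  # hypothetical name for illustration
    def __init__(self, A, num_layers=10):
        super().__init__()
        self.register_buffer("A", A)          # measurement matrix, shape (m, n)
        self.num_layers = num_layers
        # One learnable step size and threshold per unrolled iteration.
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))
        self.theta = nn.Parameter(torch.full((num_layers,), 0.01))

    def forward(self, y):
        # y: measurements of shape (batch, m); recover x of shape (batch, n).
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for k in range(self.num_layers):
            grad = (x @ self.A.T - y) @ self.A          # gradient of the data-fit term
            x = soft_threshold(x - self.step[k] * grad, self.theta[k])
        return x


if __name__ == "__main__":
    torch.manual_seed(0)
    m, n = 32, 64
    A = torch.randn(m, n) / m ** 0.5
    x_true = torch.zeros(4, n)
    x_true[:, :5] = torch.randn(4, 5)                   # sparse ground truth
    y = x_true @ A.T
    net = UnrolledISTA(A, num_layers=15)
    # End-to-end training would minimize ||net(y) - x_true||^2 over many pairs;
    # here we only run a forward pass.
    x_hat = net(y)
    print(x_hat.shape)
```

In the deep PnP setting described above, the soft-thresholding (proximal) step would instead be replaced by a pre-trained denoising network applied to the intermediate estimate, while the data-fit update is kept from the model-based algorithm.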




Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Liu, Y., Liu, J., Long, Z., Zhu, C. (2022). Deep Networks for Tensor Approximation. In: Tensor Computation for Data Analysis. Springer, Cham. https://doi.org/10.1007/978-3-030-74386-4_11

  • DOI: https://doi.org/10.1007/978-3-030-74386-4_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-74385-7

  • Online ISBN: 978-3-030-74386-4

  • eBook Packages: Engineering (R0)
