R. J. Duffin and A. C. Schaeffer, “A class of nonharmonic Fourier series,” Transactions of the American Mathematical Society, vol. 72, no. 2, pp. 341–366, 1952.
B. Schölkopf, R. Herbrich, and A. J. Smola, “A generalized representer theorem,” in International conference on computational learning theory. Springer, 2001, pp. 416–426.
J. C. Ye and W. K. Sung, “Understanding geometry of encoder-decoder CNNs,” in International Conference on Machine Learning, 2019, pp. 7064–7073.
O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
J. C. Ye, Y. Han, and E. Cha, “Deep convolutional framelets: A general deep learning framework for inverse problems,” SIAM Journal on Imaging Sciences, vol. 11, no. 2, pp. 991–1048, 2018.
D. L. Donoho, “Compressed sensing,” IEEE Trans. Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of Control, Signals and Systems, vol. 2, no. 4, pp. 303–314, 1989.
K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142–3155, 2017.
J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 341–349.
C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2015.
J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1646–1654.
M. Telgarsky, “Representation benefits of deep feedforward networks,” arXiv preprint arXiv:1509.08101, 2015.
R. Eldan and O. Shamir, “The power of depth for feedforward neural networks,” in 29th Annual Conference on Learning Theory, 2016, pp. 907–940.
M. Raghu, B. Poole, J. Kleinberg, S. Ganguli, and J. Sohl-Dickstein, “On the expressive power of deep neural networks,” in Proceedings of the 34th International Conference on Machine Learning. JMLR, 2017, pp. 2847–2854.
D. Yarotsky, “Error bounds for approximations with deep ReLU networks,” Neural Networks, vol. 94, pp. 103–114, 2017.
R. Arora, A. Basu, P. Mianjy, and A. Mukherjee, “Understanding deep neural networks with rectified linear units,” arXiv preprint arXiv:1611.01491, 2016.
S. Mallat, A wavelet tour of signal processing. Academic Press, 1999.
D. L. Donoho, “De-noising by soft-thresholding,” IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 613–627, 1995.
Y. C. Eldar and M. Mishali, “Robust recovery of signals from a structured union of subspaces,” IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302–5316, 2009.
R. Yin, T. Gao, Y. M. Lu, and I. Daubechies, “A tale of two bases: Local-nonlocal regularization on image patches with convolution framelets,” SIAM Journal on Imaging Sciences, vol. 10, no. 2, pp. 711–750, 2017.
J. C. Ye, J. M. Kim, K. H. Jin, and K. Lee, “Compressive sampling using annihilating filter-based low-rank interpolation,” IEEE Transactions on Information Theory, vol. 63, no. 2, pp. 777–801, 2016.
K. H. Jin and J. C. Ye, “Annihilating filter-based low-rank Hankel matrix approach for image inpainting,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3498–3511, 2015.
K. H. Jin, D. Lee, and J. C. Ye, “A general framework for compressed sensing and parallel MRI using annihilating filter based low-rank Hankel matrix,” IEEE Transactions on Computational Imaging, vol. 2, no. 4, pp. 480–495, 2016.
J.-F. Cai, B. Dong, S. Osher, and Z. Shen, “Image restoration: total variation, wavelet frames, and beyond,” Journal of the American Mathematical Society, vol. 25, no. 4, pp. 1033–1089, 2012.
N. Lei, D. An, Y. Guo, K. Su, S. Liu, Z. Luo, S.-T. Yau, and X. Gu, “A geometric understanding of deep learning,” Engineering, 2020.
B. Hanin and D. Rolnick, “Complexity of linear regions in deep networks,” in International Conference on Machine Learning. PMLR, 2019, pp. 2596–2604.
B. Hanin and D. Rolnick, “Deep ReLU networks have surprisingly few activation patterns,” in Advances in Neural Information Processing Systems, vol. 32, pp. 361–370, 2019.
X. Zhang and D. Wu, “Empirical studies on the properties of linear regions in deep neural networks,” arXiv preprint arXiv:2001.01072, 2020.
G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio, “On the number of linear regions of deep neural networks,” in Advances in Neural Information Processing Systems, 2014, pp. 2924–2932.