Abstract
Deep convolutional neural networks (CNNs) for image super-resolution (SR) suffer from slow convergence, long training times, slow run times, high complexity, and high computational cost. Shallow CNNs avoid these drawbacks, but they cannot adequately learn the high-frequency components of an image. The purpose of this paper is to enhance shallow networks so that they learn fine image details while retaining their advantages. We present a learning-based approach that combines a feature-extraction transformation unit with a training and approximation unit (a shallow CNN). In the transformation unit, the nonsubsampled contourlet transform (NSCT) is used to extract fine image details. Transforming the feature space into the frequency domain yields coefficients that are sufficiently sparse; this sparsity makes the neural network easier to train and reduces the training time considerably. Compared with other approaches, the proposed method offers high reconstruction accuracy, fast training, low run time, lower network complexity, and faster convergence.
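The pipeline described above can be sketched in a few lines: decompose the image into lowpass and detail (high-frequency) bands, apply a learned mapping to the near-sparse detail coefficients, then invert the decomposition. This is a minimal illustration only: NSCT has no standard Python implementation assumed here, so a simple Gaussian lowpass/highpass split stands in for it, and `shallow_map` is a hypothetical placeholder for the trained shallow CNN, not the authors' model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=2.0):
    """Stand-in for NSCT: split into a lowpass band and a detail band.

    For natural images the detail band is close to sparse (most
    coefficients near zero), which is the property the paper exploits.
    """
    low = gaussian_filter(img, sigma)
    high = img - low
    return low, high

def reconstruct(low, high):
    """Inverse of the split above (NSCT is likewise invertible)."""
    return low + high

def shallow_map(high):
    """Hypothetical placeholder for the shallow CNN that maps the
    low-resolution detail coefficients to high-resolution ones."""
    return 1.5 * high  # toy fixed gain; a real model learns this mapping

img = np.random.rand(32, 32)
low, high = decompose(img)
sr = reconstruct(low, shallow_map(high))
assert sr.shape == img.shape
```

Because only the detail band passes through the learned mapping, the network sees a sparse input, which is what makes a shallow architecture sufficient in this scheme.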
Ethics declarations
The authors declare that they have no conflicts of interest.
Cite this article
Farajzadeh, A., Mohamadi, S. & Imani, M. High Performance Image Super-Resolution Using Convolutional Neural Networks and Nonsubsampled Contourlet Transform. J. Commun. Technol. Electron. 67, 418–429 (2022). https://doi.org/10.1134/S1064226922040040