
Residual CNN Image Compression

  • Kunal Deshmukh
  • Chris Pollett
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11845)

Abstract

We present a neural network architecture inspired by the end-to-end compression framework of [1]. Our model consists of an image compression module, an arithmetic encoder, an arithmetic decoder, and an image reconstruction module. We evaluate the compression rate and the fidelity of the reconstructed image to the original. Image quality is measured with the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), and we also measure the net reduction in file size after compression and compare it with other lossy image compression techniques. On these metrics, our architecture outperforms both traditional and recently proposed image compression algorithms. In particular, it achieves an average PSNR of 28.48 dB and an SSIM of 0.86, compared to 28.45 dB and 0.81 for the network of [1].
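
For reference, the following is a minimal sketch of how PSNR and SSIM figures such as those reported above can be computed for an original/reconstructed image pair. It assumes 8-bit grayscale inputs of equal size; the file names are placeholders, and the use of scikit-image's structural_similarity is an illustrative choice, not part of the paper's pipeline.

import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity


def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for images on a 0..max_val scale."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)


if __name__ == "__main__":
    # imread(..., as_gray=True) returns floats in [0, 1]; rescale to 0..255.
    original = imread("original.png", as_gray=True) * 255.0            # placeholder file name
    reconstructed = imread("reconstructed.png", as_gray=True) * 255.0  # placeholder file name
    print(f"PSNR: {psnr(original, reconstructed):.2f} dB")
    print(f"SSIM: {structural_similarity(original, reconstructed, data_range=255.0):.2f}")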

Keywords

Convolutional neural networks · Generative adversarial networks · Structural similarity metrics · Peak signal-to-noise ratio

References

  1. Jiang, F., Tao, W., Liu, S., Ren, J., Guo, X., Zhao, D.: An end-to-end compression framework based on convolutional neural networks. IEEE Trans. Circuits Syst. Video Technol. 28(10), 3007–3018 (2018)
  2. Cavigelli, L., Hager, P., Benini, L.: CAS-CNN: a deep convolutional neural network for image compression artifact suppression. In: International Joint Conference on Neural Networks (IJCNN), pp. 752–759 (2017)
  3. Zhai, G., Zhang, W., Yang, X., Lin, W., Xu, Y.: Efficient image deblocking based on postfiltering in shifted windows. IEEE Trans. Circuits Syst. Video Technol. 18, 122–126 (2008)
  4. Foi, A., Katkovnik, V., Egiazarian, K.: Pointwise shape-adaptive DCT denoising with structure preservation in luminance-chrominance space. IEEE Trans. Image Process. 16, 1395–1411 (2006)
  5. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016). http://www.deeplearningbook.org
  6. Ledig, C., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690 (2017)
  7. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  8. Knuth, D.: Dynamic Huffman coding. J. Algorithms 6(2), 163–180 (1985)
  9. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
  10. Kingma, D., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  11. Thimm, G., Fiesler, E.: Neural network initialization. In: International Workshop on Artificial Neural Networks, pp. 535–542 (1995)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. San Jose State University, San Jose, USA
