
Faster and transferable deep learning steganalysis on GPU

  • Ye Dengpan
  • Jiang Shunzhi
  • Li Shiyu
  • Liu ChangRui

Special Issue Paper

Abstract

Steganalysis is an important and challenging problem in multimedia forensics. Many deeper networks have been put forward to improve the detection of steganographic traces, and existing methods focus on leveraging ever deeper structures. However, as a model deepens, gradient backpropagation cannot guarantee that updates flow through the weights of every module, which makes the network difficult to train; moreover, such depth consumes substantial GPU computing resources. To reduce computation and accelerate training, we propose a novel architecture that combines batch normalization with shallow layers. To reduce the loss of the subtle signals relevant to steganalysis, we decrease the depth and increase the width of the network and abandon max-pooling layers. To address the long training times required under different payloads, we propose two transfer-learning schemes, parameter multiplexing and fine-tuning, to improve overall efficiency. We demonstrate the effectiveness of our method on two steganographic algorithms, WOW and S-UNIWARD. Compared with SRM and Ye.net, our model achieves better performance on the BOSSbase database and improves efficiency.
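The two transfer-learning schemes named in the abstract can be illustrated with a minimal sketch: parameter multiplexing initializes a detector for a new payload from the weights of a detector trained at an easier (higher) payload, and fine-tuning then updates only part of the copied network. The function names, the dict-based parameter representation, and the layer names below are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of the two transfer schemes: plain dicts stand in for
# real network weights; layer names ("conv1", "fc", ...) are hypothetical.

def parameter_multiplexing(src_params, dst_params):
    """Initialise every layer of the target model (e.g. a 0.2 bpp
    detector) with the weights of a model trained at an easier
    payload (e.g. 0.4 bpp)."""
    return {name: dict(src_params[name]) for name in dst_params}

def fine_tune_split(params, frozen_prefixes=("conv",)):
    """Freeze early feature-extraction layers; only the remaining
    layers are updated while fine-tuning at the new payload."""
    frozen = {n: p for n, p in params.items()
              if n.startswith(frozen_prefixes)}
    trainable = {n: p for n, p in params.items() if n not in frozen}
    return frozen, trainable

# Example: copy a trained 0.4 bpp model into a fresh 0.2 bpp model,
# then fine-tune only the classifier head.
src = {"conv1": {"w": 1.0}, "conv2": {"w": 2.0}, "fc": {"w": 3.0}}
dst = {name: {"w": 0.0} for name in src}
init = parameter_multiplexing(src, dst)
frozen, trainable = fine_tune_split(init)
```

The design intuition is that low-level noise-residual features transfer well across payloads, so reusing them shortens training at each new payload; only the payload-sensitive upper layers need re-learning.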

Keywords

Steganalysis · Deep learning · Transfer learning · GPU

Notes

Acknowledgements

This work was partially supported by the National Key Research Development Program of China (2016QY01W0200), the National Natural Science Foundation of China NSFC (U1636101, U1636219, U1736211).

References

  1. Pevný, T., Filler, T., Bas, P.: Using high-dimensional image models to perform highly undetectable steganography. In: International Workshop on Information Hiding. Springer, Berlin (2010)
  2. Holub, V., Fridrich, J.: Designing steganographic distortion using directional filters. In: 2012 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE (2012)
  3. Holub, V., Fridrich, J., Denemark, T.: Universal distortion function for steganography in an arbitrary domain. EURASIP J. Inf. Secur. 2014(1), 1 (2014)
  4. Li, B., et al.: A new cost function for spatial image steganography. In: 2014 IEEE International Conference on Image Processing (ICIP). IEEE (2014)
  5. Pevný, T., Bas, P., Fridrich, J.: Steganalysis by subtractive pixel adjacency matrix. IEEE Trans. Inf. Forensics Secur. 5(2), 215–224 (2010)
  6. Kodovský, J., Pevný, T., Fridrich, J.: Modern steganalysis can detect YASS. In: Media Forensics and Security II, vol. 7541. International Society for Optics and Photonics (2010)
  7. Fridrich, J., Kodovský, J.: Rich models for steganalysis of digital images. IEEE Trans. Inf. Forensics Secur. 7(3), 868–882 (2012)
  8. Xia, C., et al.: Highly accurate real-time image steganalysis based on GPU. J. Real-Time Image Process. 14(1), 223–236 (2018)
  9. Tan, S., Li, B.: Stacked convolutional auto-encoders for steganalysis of digital images. In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE (2014)
  10. Li, S., et al.: Attack on deep steganalysis neural networks. In: International Conference on Cloud Computing and Security. Springer, Cham (2018)
  11. Luo, X., et al.: Steganalysis of HUGO steganography based on parameter recognition of syndrome-trellis-codes. Multimed. Tools Appl. 75(21), 13557–13583 (2016)
  12. Ma, Y., et al.: Selection of rich model steganalysis features based on decision rough set α-positive region reduction. IEEE Trans. Circuits Syst. Video Technol. 29(2), 336–350 (2019)
  13. Qian, Y., et al.: Learning and transferring representations for image steganalysis using convolutional neural network. In: 2016 IEEE International Conference on Image Processing (ICIP). IEEE (2016)
  14. Ye, J., Ni, J., Yi, Y.: Deep learning hierarchical representations for image steganalysis. IEEE Trans. Inf. Forensics Secur. 12(11), 2545–2557 (2017)
  15. Xu, G.: Deep convolutional neural network to detect J-UNIWARD. In: Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security. ACM (2017)
  16. He, K., et al.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
  17. Boureau, Y.-L., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10) (2010)
  18. Hinton, G.E., et al.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 (2012)
  19. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (2012)
  20. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
  21. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on Machine Learning, PMLR, vol. 37, pp. 448–456 (2015)
  22. Bas, P., Filler, T., Pevný, T.: Break our steganographic system: the ins and outs of organizing BOSS. In: International Workshop on Information Hiding. Springer, Berlin (2011)
  23. Filler, T., Judas, J., Fridrich, J.: Minimizing embedding impact in steganography using trellis-coded quantization. In: Media Forensics and Security II, vol. 7541. International Society for Optics and Photonics (2010)
  24. Pibre, L., et al.: Deep learning is a good steganalysis tool when embedding key is reused for different images, even if there is a cover source mismatch. Electron. Imaging 2016(8), 1–11 (2016)
  25. Salomon, M., et al.: Steganalysis via a convolutional neural network using large convolution filters for embedding process with same stego key: a deep learning approach for telemedicine. Eur. Res. Telemed. 6(2), 79–92 (2017)
  26. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al.: TensorFlow: a system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283 (2016)
  27. Amari, S.-I.: Natural gradient works efficiently in learning. Neural Comput. 10(2), 251–276 (1998)

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  • Ye Dengpan (1)
  • Jiang Shunzhi (1), Email author
  • Li Shiyu (1)
  • Liu ChangRui (1)

  1. Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, China
