A novel robust digital image watermarking scheme based on attention U-Net++ structure

  • Original article
  • Published in The Visual Computer

Abstract

With the advancement of the internet, digital image watermarking has found widespread application in domains such as copyright protection and information security. However, traditional watermarking techniques are susceptible to geometric distortions because of their limited feature extraction capabilities and their reliance on manually designed embedding algorithms. Recently, digital watermarking based on deep neural networks has emerged as a promising alternative: its powerful nonlinear fitting ability yields high robustness against a wide range of distortions, geometric distortions in particular. Yet most existing deep watermarking frameworks employ U-Net-style encoders, which may extract image features inadequately and under-exploit the correlation between the secret message and image pixels, resulting in a sub-optimal balance between visual quality and robustness. To overcome these limitations, we propose a novel encoder, Attention U-Net++, that merges the advantages of U-Net++ and Attention U-Net. By incorporating an attention mechanism into the U-Net++ architecture, the proposed encoder extracts image features effectively and locates suitable pixel regions for embedding messages, enhancing both visual quality and robustness. Furthermore, a quadratically growing loss-weight schedule based on a WGAN-style discriminator is devised to further improve performance. Experimental results demonstrate that the proposed method achieves superior visual quality and robustness compared with state-of-the-art schemes.
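The paper's exact encoder is not reproduced here, but the core building block it borrows from Attention U-Net, the additive attention gate applied to skip connections, can be sketched in a few lines. The sketch below uses plain NumPy with 1×1 channel-mixing weights (`Wx`, `Wg`, `psi` are random placeholders, not the paper's learned parameters) to show how a gating signal from a coarser layer re-weights skip-connection features per pixel:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate (Attention U-Net style).

    x   : skip-connection features, shape (C, H, W)
    g   : gating features from a coarser layer, shape (C, H, W)
    Wx, Wg : (C_int, C) channel-mixing weights (stand-ins for 1x1 convs)
    psi : (1, C_int) weights producing a scalar attention map
    Returns x scaled by a per-pixel attention coefficient in (0, 1).
    """
    C, H, W = x.shape
    xf = x.reshape(C, -1)                       # (C, H*W)
    gf = g.reshape(C, -1)
    q = np.maximum(Wx @ xf + Wg @ gf, 0.0)      # ReLU(Wx*x + Wg*g)
    alpha = sigmoid(psi @ q).reshape(1, H, W)   # attention coefficients
    return x * alpha                            # re-weighted skip features

rng = np.random.default_rng(0)
C, C_int, H, W = 4, 8, 6, 6
x = rng.standard_normal((C, H, W))
g = rng.standard_normal((C, H, W))
Wx = rng.standard_normal((C_int, C)) * 0.1
Wg = rng.standard_normal((C_int, C)) * 0.1
psi = rng.standard_normal((1, C_int)) * 0.1
out = attention_gate(x, g, Wx, Wg, psi)
print(out.shape)  # (4, 6, 6)
```

In the proposed encoder this gate would sit on the dense skip pathways of U-Net++, so that features passed to the decoder are attenuated where message embedding would be visually costly.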
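The abstract's "quadratic nonlinear growth" of the adversarial loss weight can be illustrated with a minimal schedule. The exact functional form, cap, and step counts below are assumptions for illustration, not values taken from the paper:

```python
def adversarial_weight(step, total_steps, w_max):
    """Quadratically growing loss weight, w(t) = w_max * (t / T)^2.

    Kept small early so the network first learns to embed and recover
    messages, then ramped up so the WGAN-style discriminator's feedback
    increasingly polishes visual quality.
    """
    frac = min(step / total_steps, 1.0)  # clamp after T steps
    return w_max * frac * frac

# Example: weight over a 10k-step schedule with w_max = 1.0
print(adversarial_weight(0, 10_000, 1.0))       # 0.0
print(adversarial_weight(5_000, 10_000, 1.0))   # 0.25
print(adversarial_weight(10_000, 10_000, 1.0))  # 1.0
```

The quadratic shape grows slower than a linear ramp at first and faster later, which matches the stated intent of delaying the adversarial term's influence.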


Data availability

The authors confirm that the data supporting the findings of this study are available within the article.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 62062044, 61762054).

Author information


Corresponding author

Correspondence to Yi Zhao.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

See Table 11.

Table 11 Abbreviations used in this paper

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhu, L., Zhao, Y., Fang, Y. et al. A novel robust digital image watermarking scheme based on attention U-Net++ structure. Vis Comput (2024). https://doi.org/10.1007/s00371-024-03271-z
