1 Introduction

Image security has grown increasingly critical as information and communication technologies have simplified digital image transmission. Digital image watermarking is an effective technique for protecting privacy and intellectual property [26, 50], authenticating content [5], identifying owners [32, 34], and validating digital images [8, 16, 18, 21]. Traditional watermarking algorithms create a protected image by inserting the watermark information into the digital image [1, 12, 24, 31, 59]. However, these algorithms have several drawbacks: the embedded information contaminates the original image data, so traditional digital watermarking distorts the watermarked image. Such distortion is unacceptable in applications such as medical diagnosis, artwork scanning, and military imaging systems. Moreover, because robustness and imperceptibility are inherently in conflict, this strategy makes them difficult to balance.

Zero-watermarking technology addresses this embedding problem, strengthening copyright protection for digital multimedia, especially images, while preserving visual quality. In the zero-watermarking approach [11, 28, 51, 56,57,58], the watermark sequence is logically associated with the original image instead of being physically embedded in it, which maintains the image's integrity and thus yields a high level of imperceptibility. Zero-watermarking techniques have the following advantages: (1) good imperceptibility, since they preserve the quality of the original image without any change; (2) a proper balance of robustness, imperceptibility, and capacity; (3) the involvement of a copyright authentication authority.

Instead of embedding a watermark sequence, zero-watermarking extracts intrinsic information from the host image. These inherent properties are combined with the owner's watermark sequence to create a master share that is securely stored [35, 39, 60]. The owner can then demonstrate ownership of a protected image, which may be conveyed over any unsecured public communication channel, by matching the intrinsic features extracted from the analyzed image against the master share. Extracting significant intrinsic features from the host image is therefore the key problem in achieving the desirable performance of a zero-watermarking technique.

Based on the image features employed [19, 51], zero-watermarking approaches can be divided into four categories: methods based on spatial-domain features [2, 3], on frequency-domain features [42, 43, 53], on moment features [10, 36], and on CNN features [9, 14].

In the first category, spatial-domain features are used directly as image features. However, under geometric and image-processing attacks, spatial-domain features are highly sensitive, regardless of whether edge or texture information is used [2, 3]. In the second category, image features are constructed from frequency-domain features; these, however, lack rotation and scaling invariance, resulting in poor performance [42, 43, 53]. In the third category, image features are generated using moments and moment invariants, which have useful invariance properties [10, 36]. In the fourth category, color image features are extracted from CNN layers by merging deep feature maps into a feature image [9, 14].

Sun et al. [40] presented a zero-watermarking algorithm combining the quantization embedding rule, the generalized Arnold transform, and the spread spectrum technique. Using orthogonal Fourier-Mellin moments (OFMMs), Shao et al. [37] introduced a robust double zero-watermarking scheme to protect the copyright of two images simultaneously. Thanh et al. [41] introduced a robust zero-watermarking algorithm using QR decomposition and a permuted visual feature map to reduce computational cost and improve robustness. Liu et al. [29] introduced a zero-watermarking technique with higher robustness to geometric attacks; in this algorithm, a timestamp is added to the watermark sequence to resolve the problems caused by interpretation attacks. Later, researchers developed a zero-watermarking technique using polar complex exponential transforms (PCETs) and logistic maps [47]. In addition, radial harmonic Fourier moments (RHFMs) in ternary representation were employed to create a zero-watermarking algorithm for stereo images [48]. Using a modified logistic map and SCA, Daoui et al. [7] proposed a strong image encryption and zero-watermarking approach; it integrates image zero-watermarking and encryption to provide a higher level of security when sending images over the internet.

Xia et al. [55] suggested a zero-watermarking algorithm using fractional-order RHFMs for lossless copyright protection of medical gray-scale images. Hosny and Darwish [19] proposed a new zero-watermarking scheme using multi-channel orthogonal fractional-order Legendre-Fourier moments (MFrLFMs) to handle color images. Based on new multi-channel fractional-order shifted Gegenbauer moments (FrMGMs), Hosny and Darwish [17] presented a zero-watermarking scheme to protect color images in the medical field. Roček et al. [33] presented an assessment of zero-watermarking approaches for ensuring the integrity and authorship of medical images, evaluating how effectively selected zero-watermarking approaches protect the integrity and verify the authorship of medical image investigations.

Wang et al. [46] proposed an octonion orthogonal moments theory applicable to zero-watermarking of color stereoscopic images. Xia et al. [54] proposed a zero-watermarking approach based on novel quaternion PCETs for color images. Gao et al. [13] combined PCETs with a self-organizing map and a deep CNN to introduce video zero-watermarking. Han et al. [15] presented a federated learning-based zero-watermarking approach for protecting healthcare data. Hu et al. [20] developed a zero-watermarking algorithm that effectively protects the copyright of medical images and detects tampered regions simultaneously. Ma et al. [30] utilized the ternary polar complex exponential transform and a chaotic system to propose a zero-watermarking algorithm that protects two medical images simultaneously. Based on accurate quaternion generalized OFMMs, Wang et al. [49] introduced a zero-watermarking approach for color images.

Recently, Fierro-Radilla [9] opened a new research direction in zero-watermarking by introducing a CNN-based zero-watermarking technique. Following [9], Han et al. [14] used the VGG19 deep convolutional neural network to introduce a robust zero-watermarking algorithm.

Despite the extensive research devoted to zero-watermarking, most existing approaches still have issues and limitations, which can be summarized as follows:

  1. Most of these approaches show little resistance against geometric attacks.

  2. Most of these approaches show little resistance against combinations of common signal-processing attacks and geometric attacks.

  3. Approaches whose features are extracted in the frequency domain are not robust to geometric attacks; moreover, their applicability and time complexity are inferior.

  4. Most traditional zero-watermarking approaches are suitable only for grayscale images, whereas color images are far more common in practice.

  5. Most moment-based zero-watermarking approaches compute the moments by approximation, which is unstable, inaccurate, and inefficient; this considerably degrades the feature-extraction performance of these approaches.

Taking the above challenges into consideration, in this paper we present a new zero-watermarking algorithm for color images using the VGG19 deep convolutional neural network [14], which has excellent intrinsic properties. Compared with other CNNs, VGG19 increases network depth and uses an alternating structure of numerous nonlinear activation layers and convolution layers [25], which is beneficial for extracting precise features. Accordingly, after preprocessing, we use only the max-pooling and convolution layers of the pre-trained VGG19 to extract deep feature maps from color images, rather than performing image classification. In contrast to other zero-watermarking methods, the proposed algorithm extracts high-level features from color images, increasing the zero-watermark's resistance to geometric attacks. Additionally, a Chebyshev-based chaotic system known as 2D-LACM [27] is used to ensure the proposed algorithm's high level of security. The overall contributions of this work are:

  • The proposed algorithm uses a deep convolutional neural network, VGG19, to extract robust essential features of the host color images for zero-watermarking.

  • The proposed algorithm achieves higher resistance against geometric attacks and common signal-processing attacks.

  • To enhance the zero-watermarking security, this paper uses a novel approach (2D-LACM) [27] to scramble the feature matrix and binary logo image.

  • Extensive experiments are conducted on standard color images to verify that the proposed technique has high resilience against various attacks and outperforms existing zero-watermarking algorithms.

The proposed approach consists of four main stages. First, the feature map of the original color image is created by a forward pass through the pre-trained CNN (VGG19), taking the output of the second fully connected layer (fc_2). Then, using a security chaos sequence created by 2D-LACM, the extracted features are binarized. Next, the owner's watermark sequence is combined with the binarized features. Finally, an XOR operation is performed on the encrypted version of the binarized image features and the encrypted version of the binary watermark digits to create an ownership verification key, also known as a zero-watermark.

We show experimentally that our algorithm successfully resists various types of attacks and outperforms competing algorithms.

The rest of the paper is organized as follows. Section 2 reviews VGGnet feature extraction and the 2D-LACM. Section 3 describes the suggested zero-watermarking scheme in detail, and Section 4 presents the experimental results. Finally, Section 5 concludes the paper.

2 Preliminaries

This section explains VGGnet feature extraction and the 2D-LACM in detail.

2.1 VGGnet feature extraction

VGGnet is a type of deep CNN that is frequently used for learning and feature extraction [9]. The most commonly used VGGnet is VGG19, introduced by Simonyan and Zisserman in 2015 [38], which comprises 16 convolution layers and 3 fully connected layers (19 weight layers). VGG19 uses convolution layers with sequences of 3 × 3 kernels to expand the number of feature channels and extract image features. If \({T}_{j}\) and \({\delta }_{j}\) are the weights and biases of the jth convolution layer, the features can be computed as follows:

$${Z}_{j}^{out}=S\left({T}_{j}*{Z}_{j}^{in}+{\delta }_{j}\right),$$
(1)

where \({Z}_{j}^{in}\) and \({Z}_{j}^{out}\) denote the input and output feature maps, respectively, and \(S\) denotes the rectified linear unit (ReLU). The stride in each convolution layer is set to 1. VGG19 adopts max-pooling layers to reduce the size of the feature map and avoid an explosion in computation. The feature representation is mapped to the sample label space by the fully connected layers, in which each node of a given layer is connected to every node of the preceding layer:

$$L={F}_{3}\left({F}_{2}\left({F}_{1}\left(MP\left({Z}_{16}^{out}\right)\right)\right)\right).$$
(2)

where \(MP\) signifies the max-pooling operation and \({F}_{k}\) denotes the operation of the \({k}^{th}\) fully connected layer. A softmax layer generates the image classification outcome at the end of VGG19:

$${L}_{i}=\frac{{e}^{{x}_{i}}}{{\sum }_{c=1}^{C}{e}^{{x}_{c}}} ,$$
(3)

in which \(C\) is the classification number, \({x}_{i}\) is the output of the \({i}^{th}\) node and \({L}_{i}\) is the probability of the \({i}^{th}\) node.

In our zero-watermarking approach, we use the VGG19 network architecture shown in Fig. 1. The feature map is extracted from the output of the second fully connected layer, as depicted in Fig. 1.
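To make this concrete, the following minimal Python sketch (assuming PyTorch and torchvision with an ImageNet pre-trained VGG19; the input file name is hypothetical, and tapping the convolutional sub-network is an illustrative choice, since the exact tap point used in the paper follows Fig. 1) shows how a deep feature map can be obtained from a color image without running the classification head:

```python
# A minimal sketch of deep feature-map extraction with a pre-trained VGG19.
# PyTorch/torchvision are assumed; the tapped layer is illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load the pre-trained network and switch to inference mode: only forward
# passes are needed, no training takes place.
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),                 # VGG19's expected input size
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])

def extract_feature_map(image_path: str) -> torch.Tensor:
    """Return a deep feature map from the conv + max-pooling part of VGG19."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fm = vgg19.features(x)            # convolution and pooling layers only
    return fm.squeeze(0)                  # shape (512, 7, 7) for 224 x 224 input

fm = extract_feature_map("host.png")      # hypothetical host image file
```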

Fig. 1

The VGG19 network architecture utilized to extract the feature map

2.2 Chebyshev map with a two-dimensional logistic adjustment (2D-LACM)

The logistic map is described as follows:

$${u}_{i+1}=\alpha {u}_{i}(1-{u}_{i}),$$
(4)

where \({u}_{i}\in \left[0, 1\right]\) and \(\alpha \in [0, 4]\). The logistic map behaves chaotically when \(\alpha \in (3.569945972, 4]\).

The Chebyshev map is a one-parameter, low-dimensional chaotic system. Its mathematical expression is given in Eq. (5).

$${u}_{i+1}=cos \left(\mu {cos}^{-1}\left({u}_{i}\right) \right),$$
(5)

where \(\mu\) is the Chebyshev map's control parameter, and when \(\mu\) is larger than one, this map begins to display chaotic behavior. The mathematical expression of the 2D-LACM is given in [27] as follows:

$$\left\{\begin{array}{c}{u}_{i+1}=\pi {e}^{(\beta \times {u}_{i}\times \left(1-{u}_{i}\right)+{v}_{i})}{cos}^{-1}\left({u}_{i}\right) mod 1,\\ {v}_{i+1}=\pi {e}^{(\beta \times {v}_{i}\times \left(1-{v}_{i}\right)+{u}_{i+1})}{cos}^{-1}\left({v}_{i}\right) mod 1.\end{array}\right.$$
(6)

In Eq. (6), \(\beta\) is the enhanced chaotic map's control parameter, which lies in the range [0, 4]. To improve the security of zero-watermarking, three chaotic sequences created by 2D-LACM are employed in this study to encrypt the watermark and scramble the binary feature sequence. The secret key used to produce a chaotic sequence consists of the initial states (\({u}_{0}, {v}_{0})\) and the parameter \(\beta\) of 2D-LACM.
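As a concrete illustration, the following Python sketch (NumPy assumed; the function name is ours) iterates Eq. (6) and keeps the \(v\) values, matching the sequence-generation step just described; the example key values are those of \(S{K}_{1}\) in Section 4.1.1:

```python
# A minimal sketch of chaotic sequence generation with the 2D-LACM of Eq. (6).
import numpy as np

def lacm_sequence(u0: float, v0: float, beta: float, n: int) -> np.ndarray:
    """Iterate Eq. (6) n times and return the n successive values of v."""
    u, v = u0, v0
    out = np.empty(n)
    for i in range(n):
        # Note that the v-update uses the freshly computed u (i.e. u_{i+1}).
        u = (np.pi * np.exp(beta * u * (1 - u) + v) * np.arccos(u)) % 1.0
        v = (np.pi * np.exp(beta * v * (1 - v) + u) * np.arccos(v)) % 1.0
        out[i] = v
    return out

# Example with the secret key SK1 from Section 4.1.1:
cs1 = lacm_sequence(0.8633, 0.9234, 0.9956, 64 * 64)
```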

3 Zero-watermarking algorithm for color images

The suggested technique is divided into two stages: zero-watermark generation and verification. The objective of zero-watermark generation is to utilize the essential features of the host image to produce a zero-watermark, while zero-watermark verification authenticates the original image's copyright. Before describing the two stages in detail, we first discuss watermark encryption in the proposed algorithm.

3.1 2D-LACM based encryption

The architecture for applying 2D-LACM in the proposed approach is depicted in Fig. 2. Watermark encryption (the bottom portion of Fig. 2) randomly confuses pixel coordinates and modifies the bit values of a binary watermark image using pixel-level scrambling and bit-operation diffusion. Assuming the watermark \(W\) is of size \(P\times Q\), the watermark encryption process via 2D-LACM is as follows:

  1. The chaotic system (6) is iterated \(P\times Q\) times using the secret key \(S{K}_{1}=({u}_{0}^{1},{v}_{0}^{1}, {\beta }^{1})\) to obtain \(P\times Q\) values of \({v}_{i+1}\).

  2. These values form a chaotic decimal sequence \(C{S}_{1}\) of length \(P\times Q\). Similarly, the chaotic decimal sequences \(C{S}_{2}\) and \({CS}_{3}\) are constructed using \(S{K}_{2}=({u}_{0}^{2},{v}_{0}^{2}, {\beta }^{2})\) and \(S{K}_{3}=\left({u}_{0}^{3},{v}_{0}^{3}, {\beta }^{3}\right).\)

  3. The decimal chaotic sequence \(C{S}_{2}\) is turned into a binarized chaotic sequence according to the following equation:

    $${CS}_{2}\left(i\right)=\left\{\begin{array}{l}1, if\, {CS}_{2}\left(i\right)\ge ME,\\ 0, if\, {CS}_{2}\left(i\right) < ME,\end{array}\right. 1\le i\le P\times Q,$$
    (7)

where \(ME\) is the average of \({CS}_{2}\).

Fig. 2

2D-LACM framework application in zero-watermarking

  4. \(C{S}_{1}\) is sorted in ascending order, and the associated index vector \(L=({L}_{1},{L}_{2},\dots , {L}_{P\times Q})\) of the original pixel locations is computed.

  5. The watermark \(W\) is rearranged as a vector \({W}_{b}\) of length \(P\times Q\) and then scrambled at the pixel level according to the elements of the index vector \(L\), yielding the shuffled watermark \({W}_{c}\).

  6. A bit-level XOR is performed on the shuffled watermark \({W}_{c}\) to obtain the encrypted version \({W}_{sc}\) using the formula below:

    $${W}_{sc}={W}_{c}\oplus {CS}_{2} .$$
    (8)

  7. The encrypted watermark sequence \({W}_{sc}\) is rearranged into the \(P\times Q\) encrypted watermark image \({W}_{s}\).

Watermark descrambling is the inverse of watermark encryption: a back-diffusion operation is performed first, followed by an inverse scrambling operation. In addition, as shown in the upper portion of Fig. 2, the binary feature sequence is scrambled by permuting its bit locations with a decimal chaotic sequence \({CS}_{3}\) of length \(P\times Q\) generated from \(S{K}_{3}=\left({u}_{0}^{3},{v}_{0}^{3}, {\beta }^{3}\right)\). This scrambling process is the same as steps 3 and 4 of watermark encryption.
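A minimal NumPy sketch of this encryption and its inverse is given below; `lacm_sequence` refers to the 2D-LACM sketch of Section 2.2, and the helper names are ours:

```python
# A minimal sketch of the watermark encryption of Section 3.1: pixel-level
# scrambling by the ascending-order index of CS1 (steps 4-5), then bit-level
# XOR diffusion with the binarized CS2 (Eqs. (7)-(8)).
import numpy as np

def binarize(cs: np.ndarray) -> np.ndarray:
    """Eq. (7): threshold a decimal chaotic sequence at its mean ME."""
    return (cs >= cs.mean()).astype(np.uint8)

def encrypt_watermark(W, cs1, cs2):
    """W: binary P x Q watermark; cs1, cs2: decimal sequences of length P*Q."""
    P, Q = W.shape
    L = np.argsort(cs1)                    # ascending-order index vector L
    Wc = W.reshape(-1)[L]                  # pixel-level scrambling
    Wsc = Wc ^ binarize(cs2)               # bit-level XOR diffusion, Eq. (8)
    return Wsc.reshape(P, Q)               # encrypted watermark image Ws

def decrypt_watermark(Ws, cs1, cs2):
    """Inverse of encrypt_watermark: back-diffusion, then inverse scrambling."""
    P, Q = Ws.shape
    L = np.argsort(cs1)
    Wc = Ws.reshape(-1) ^ binarize(cs2)    # undo the XOR diffusion
    Wb = np.empty_like(Wc)
    Wb[L] = Wc                             # undo the permutation
    return Wb.reshape(P, Q)
```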

3.2 Zero-watermark generation and verification

3.2.1 Generation of zero-watermark

A binary image \(W\) of size \(P\times Q\) with a specific meaning is chosen as the original watermark, and a color image \(I\) of size \(M\times N\) as the host image. For convenience of calculation, we let \(P=Q=64\) and \(M=N=512\) in the experiments. Figure 3 shows the zero-watermark generation procedure.

  • (1) We use the pre-trained VGG19 to extract the original color image's deep feature maps, \(FM(k,l,p)\) \(:\)

    $$I\left(i,j\right)\to VGG19\to FM\left(k,l,p\right),$$
    (9)

    where \(k\), \(l\), and \(p\) are the dimensions of the feature map \(FM\) produced by the VGG19 network architecture shown in Fig. 1, with \(1\le k\le 10, 1\le l\le 10,\) and \(1\le p\le 4096\).

  • (2) We construct the feature sequence \(BF\) by randomly choosing \(P\times Q\) features from the feature maps and then binarizing each one:

    $$BF\left(r\right)=\left\{\begin{array}{l}1, if FM\left(r\right)\ge ME,\\ 0, otherwise.\end{array}\right.$$
    (10)

    where \(r\) is a random index, \(1\le r\le 10\times 10\times 4096\), and \(ME\) is the average of the chosen features.

  • (3) Binary feature sequence permutation. With the secret key \(S{K}_{3}\), we scramble the binary feature sequence \(BF\) using a decimal chaotic sequence \({CS}_{3}\) constructed from 2D-LACM, as discussed at the end of Section 3.1, and reshape the result into the matrix \(B{F}_{s}\) of size \(P\times Q\).

  • (4) Binary image encryption. With the two secret keys \(S{K}_{1}\) and \(S{K}_{2}\), and to improve zero-watermarking security, we encrypt the binary watermark into \({W}_{s}\) using the chaotic sequences \({CS}_{1}\) and \(C{S}_{2}\) generated from 2D-LACM, as described in Section 3.1.

  • (5) Zero-watermark construction. We apply XOR to the encrypted watermark image \({W}_{s}\) and the scrambled binary feature matrix \(B{F}_{s}\) to construct the zero-watermark \({W}_{zero}\), as follows:

    $${W}_{zero}={W}_{s}\oplus {BF}_{s} .$$
    (11)
  • (6) Finally, we store the secret key \(S{K}_{3}\) used in Step 3, the secret keys \(S{K}_{1}\) and \(S{K}_{2}\) used in Step 4, and the zero-watermark \({W}_{zero}\) in the copyright verification database. A minimal code sketch of the whole generation process is given after this list.
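Under the same assumptions as the earlier sketches, the generation procedure can be summarized as follows. The seeded random selection of feature positions is our own device for making the selection reproducible between generation and verification, a detail the paper does not spell out, and the feature-map dimensions of torchvision's VGG19 differ from the 10 × 10 × 4096 map of Fig. 1:

```python
# A minimal end-to-end sketch of zero-watermark generation (Section 3.2.1),
# reusing extract_feature_map, lacm_sequence and encrypt_watermark from the
# earlier sketches.
import numpy as np

def generate_zero_watermark(image_path, W, sk1, sk2, sk3, seed=0):
    P, Q = W.shape
    fm = extract_feature_map(image_path).numpy().reshape(-1)   # step (1), Eq. (9)

    rng = np.random.default_rng(seed)                          # reproducible choice
    picked = fm[rng.choice(fm.size, size=P * Q, replace=False)]
    BF = (picked >= picked.mean()).astype(np.uint8)            # step (2), Eq. (10)

    cs3 = lacm_sequence(*sk3, P * Q)
    BFs = BF[np.argsort(cs3)].reshape(P, Q)                    # step (3): scramble

    cs1, cs2 = lacm_sequence(*sk1, P * Q), lacm_sequence(*sk2, P * Q)
    Ws = encrypt_watermark(W, cs1, cs2)                        # step (4)

    return Ws ^ BFs                                            # step (5), Eq. (11)
```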

Fig. 3

Process flow for zero-watermark generation

3.2.2 Verification of zero-watermark

The zero-watermark verification phase is mainly utilized to detect an image's watermark information. The verification process is depicted in Fig. 4, and its specific steps are presented below.

  • Step 1: Using Eq. (9), the feature maps \(FM^{\prime}(k,l,p)\) are extracted from the attacked color image with VGG19.

  • Step 2: The feature sequence \(BF^{\prime}\) is created by randomly selecting \(P\times Q\) features from the feature maps \(FM^{\prime}\) and then binarizing each one using Eq. (10).

  • Step 3: The binary feature sequence \(BF^{\prime}\) is scrambled using the chaotic sequence \({CS}_{3}\) from the generation process and then reshaped into the matrix \(B{F}_{s}^{\prime}\) of size \(P\times Q\).

  • Step 4: Encrypted watermark image generation. We apply XOR to the stored zero-watermark \({W}_{zero}\) and the scrambled binary feature matrix \(B{F}_{s}^{\prime}\) of the verified image to construct the encrypted watermark image \({W}_{s}^{\prime}\) as follows:

    $${W^{\prime}}_{s}={W}_{zero}\oplus {B{F}^{\prime}}_{s} .$$
    (12)
  • Step 5: Retrieving the watermark image. Finally, we apply 2D-LACM decryption to the encrypted watermark image \({W}_{s}^{\prime}\) to retrieve the verifiable watermark image \(W^{\prime}\).

Fig. 4

Process flow for the verification of zero-watermark

As shown in Fig. 4, the first step of the descrambling process is to use the binary chaotic sequence \(C{S}_{2}\), formed via 2D-LACM with the secret key \(S{K}_{2}\), to execute a back-diffusion operation on the scrambled image \({W}_{s}^{\prime}\); this step undoes the bit-level diffusion applied during encryption. Following the back-diffusion, an inverse scrambling operation retrieves the original watermark image \(W^{\prime}\). This operation employs the ascending-order index of the chaotic decimal sequence \(C{S}_{1}\), also created via 2D-LACM with the secret key \(S{K}_{1}\), and restores the original structure and appearance of the watermark image. If the secret keys \(S{K}_{1}\) and \(S{K}_{2}\) are correct and match the ones used in the scrambling procedure, the reverse scrambling yields a watermark identical to the original. Figure 4 illustrates this reverse scrambling process, showing how precisely the original watermark image can be retrieved when the exact secret keys are used.
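A matching sketch of the verification procedure, reusing the hypothetical helpers from the earlier sketches, is shown below; the feature positions must be selected exactly as during generation, hence the shared seed:

```python
# A minimal sketch of zero-watermark verification (Section 3.2.2).
import numpy as np

def verify_zero_watermark(attacked_path, W_zero, sk1, sk2, sk3, seed=0):
    P, Q = W_zero.shape
    fm = extract_feature_map(attacked_path).numpy().reshape(-1)  # step 1

    rng = np.random.default_rng(seed)      # same positions as in generation
    picked = fm[rng.choice(fm.size, size=P * Q, replace=False)]
    BF = (picked >= picked.mean()).astype(np.uint8)              # step 2

    cs3 = lacm_sequence(*sk3, P * Q)
    BFs = BF[np.argsort(cs3)].reshape(P, Q)                      # step 3

    Ws = W_zero ^ BFs                                            # step 4, Eq. (12)
    cs1, cs2 = lacm_sequence(*sk1, P * Q), lacm_sequence(*sk2, P * Q)
    return decrypt_watermark(Ws, cs1, cs2)                       # step 5
```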

4 Experimental results and analysis

In general, zero-watermarking requires both robustness and imperceptibility. Zero-watermarking has outstanding imperceptibility by nature, so robustness is the main requirement. In this section, we conduct five sets of experiments to validate the effectiveness of the zero-watermarking algorithm proposed in this paper.

4.1 Experimental setup

4.1.1 Data sets

From the well-known standard color image datasets USC-SIPI [44] and Computer Vision Group (CVG) [6], we chose seven color images of 512 × 512 pixels as the host images, displayed in Fig. 5(a-g). We used ten binary images of 64 × 64 pixels as the watermarks, shown in Fig. 6(a-j). Table 1 shows the experimental parameters. The three secret keys \(S{K}_{1} ,S{K}_{2 }\, and\, S{K}_{3}\), each consisting of the initial values \({u}_{0}, {v}_{0}\) and the control parameter \(\beta\) of the 2D-LACM, are given for chaotic sequence generation as follows: \(S{K}_{1}=\left({u}_{0}^{1}=0.8633,{v}_{0}^{1}= 0.9234, {\beta }^{1}= 0.9956\right)\), \(S{K}_{2}=\left({u}_{0}^{2}= 0.897,{v}_{0}^{2}=0.9985, {\beta }^{2}= 0.9049\right)\), and \(S{K}_{3}=\left({u}_{0}^{3}=0.8622,{v}_{0}^{3}=0.9028, {\beta }^{3}=0.7112\right).\)

Fig. 5

Seven original test color images (a-g) and a medical brain image (h)

Fig. 6

Ten original watermarks (a-j)

Table 1 Experimental parameters

4.1.2 Performance evaluation metrics

The peak signal-to-noise ratio (PSNR) was utilized to assess the attacked image's quality:

$$PSNR=10{log}_{10}\left(\frac{{255}^{2}\times M\times N\times 3}{{\sum }_{k=1}^{3}{\sum }_{x=1}^{M}{\sum }_{y=1}^{N}{\left[{I}_{k}\left(x,y\right)-{{I}^{\prime}}_{k}\left(x,y \right)\right]}^{2}}\right),$$
(13)

where \(I^{\prime}\left(x,y\right)\) and \(I\left(x,y\right)\) denote the attacked and original images of size \(M\times N\), respectively, and \(k\in \{R,G,B\}\).

We evaluated the robustness of the proposed method using the bit error rate (BER) and the normalized cross-correlation (NCC) of the retrieved watermark image, defined as follows:

$$BER=\frac{1}{P\times Q}\sum\limits_{i=1}^{P}\sum\limits_{j=1}^{Q}[W(i,j)\oplus W^{\prime}(i,j)] ,$$
(14)
$$NCC=\frac{{\sum }_{i=1}^{P}{\sum }_{j=1}^{Q}[W\left(i,j\right)*W^{\prime}(i,j)]}{\sqrt{{\sum }_{i=1}^{P}{\sum }_{j=1}^{Q}{\left[W\left(i,j\right)\right]}^{2}}\sqrt{{\sum }_{i=1}^{P}{\sum }_{j=1}^{Q}{\left[{W}^{\prime}\left(i,j\right)\right]}^{2}}} ,$$
(15)

where \(W(i, j)\) and \(W^{\prime}(i,j)\) are the original and retrieved watermark images with the size \(P \times Q\), respectively.

A lower BER and a higher NCC indicate better robustness, and a higher PSNR indicates higher image quality.
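For reference, a direct NumPy transcription of Eqs. (13)-(15) might look as follows; inputs are assumed to be uint8 RGB arrays for PSNR and binary 0/1 uint8 arrays for BER and NCC:

```python
# A minimal sketch of the evaluation metrics of Eqs. (13)-(15).
import numpy as np

def psnr(I, I2):
    """Eq. (13): I, I2 are M x N x 3 uint8 images."""
    mse = np.mean((I.astype(np.float64) - I2.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def ber(W, W2):
    """Eq. (14): fraction of differing bits between two binary P x Q images."""
    return float(np.mean(W ^ W2))

def ncc(W, W2):
    """Eq. (15): normalized cross-correlation of two binary P x Q images."""
    num = np.sum(W * W2, dtype=np.float64)
    den = (np.sqrt(np.sum(W.astype(np.float64) ** 2))
           * np.sqrt(np.sum(W2.astype(np.float64) ** 2)))
    return num / den
```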

4.2 Anti-attack performance analysis

Several experiments were carried out using conventional attacks to assess the robustness of the proposed zero-watermarking technique. Attacks such as rotation, scaling, brightness adjustment, filtering, additive noise, and JPEG compression were applied to the color images in this subsection. Table 1 lists the detailed parameters. The BER and NCC values are calculated between the extracted watermark and the original one. Table 2 gives detailed descriptions of the geometric and conventional signal-processing attacks.

Table 2 Attack types with varying parameters

The robustness of the proposed technique is examined in this section for common image processing and geometric attacks. The conducted experiments can be divided into two main parts according to the size of the original image:

  • We started with the standard color image 'Baboon' of size \(256 \times 256\), shown in Fig. 5. In Tables 3, 4, 5, 6, and 7, the PSNR, BER, and NCC values of the proposed technique are computed for each attack and listed with the associated retrieved watermark image. The binary image 'Camel' was selected as the watermark. As demonstrated in Table 3, the watermarks retrieved by the proposed approach are close to the original, and the resulting BER and NCC values are close to optimal, clearly showing that the retrieved watermarks remain detectable.

  • The second set of experiments was carried out on the seven selected standard color images of size \(512 \times 512\) depicted in Fig. 5. Tables 8, 9, 10, 11, 12, and 13 exhibit the PSNR and NCC values of the suggested method for each attack.

Table 3 Robustness against rotation attacks
Table 4 Robustness against scaling attacks
Table 5 Robustness against filtering attack
Table 6 Robustness against noise attack
Table 7 Robustness against JPEG compression and conventional combined attacks
Table 8 Robustness to rotation attacks
Table 9 Robustness to scaling attack
Table 10 Robustness to filtering attacks
Table 11 Robustness to noise attack
Table 12 Robustness against JPEG compression
Table 13 Robustness against conventional combined attacks

4.2.1 Anti-geometric attacks performance

The most common geometric attacks, such as scaling and rotation, cause loss of synchronization in watermark detection. In this experiment, the test image is rotated by angles of 1°, 3°, 5°, and 10°; Table 3 shows the results of the rotation attacks. At most angles, the NCC and BER values are 1.0 and 0, respectively, indicating that this approach exhibits nearly perfect resilience to rotation attacks. The original image is then scaled, both reduced and magnified, with scaling factors of 0.25, 0.5, 2.0, and 4.0; Table 4 shows the results of the scaling attacks. The results in Table 4 show that after a scaling attack of 0.5, the BER and NCC values are 0.0004 and 0.9992, respectively, which are close to the ideal values of 0 and 1. For the other scaling factors, the BER and NCC reach the optimal values of 0 and 1.0, respectively, indicating that this approach exhibits perfect resilience to scaling attacks.

4.2.2 Anti-image processing attacks performance

Here, we use the most common image processing attacks, such as filtering, noise, JPEG compression, sharpening (unsharp masking), and histogram equalization, to assess the robustness of the proposed algorithm through the following experiments on the test color image 'Baboon'. First, the test color image was subjected to filtering attacks; Table 5 shows the filtered images and their PSNR values, together with the retrieved watermark images and their BER and NCC values. The NCC and BER values are near the ideal values of 1.0 and 0, respectively, and the original and extracted watermarks are extremely close, demonstrating that the proposed algorithm effectively survives image-filtering attacks. Second, noise attacks on the test color image were conducted with salt & pepper noise and Gaussian noise. Table 6 shows the noisy images and their PSNR values, together with the retrieved watermark images and their BER and NCC values. From Table 6, we observe that the BER and NCC values are close to the ideal values of 0 and 1; in particular, for three of the salt & pepper noise attacks, the BER and NCC reach the optimal values of 0 and 1.0, respectively. The retrieved watermarks are extremely similar or identical to the original ones, demonstrating strong resilience to noise attacks. Then, the test color image was subjected to JPEG compression attacks with quality factors Q = 5, 10, 50, and 90; Table 7 shows the compressed images and their PSNR values, together with the retrieved watermark images and their BER and NCC values. The results in Table 7 show that after a compression attack with Q = 10, the BER and NCC values are 0.0002 and 0.9996, respectively, which are close to the ideal values of 0 and 1. For the other quality factors, the BER and NCC reach the optimal values of 0 and 1.0, respectively, and the retrieved watermarks are identical to the original ones, demonstrating perfect resilience to JPEG compression attacks.

Finally, sharpening, histogram equalization, and conventional combined attacks were conducted on the test color image; the attacked images and their PSNR values are displayed in Table 7, along with the retrieved watermark images and their BER and NCC values. The BER and NCC values are close to, and in some cases equal to, the ideal values of 0 and 1.0, respectively. Additionally, the retrieved watermarks are extremely similar or identical to the original ones, demonstrating that this approach is capable of resisting these attacks.

4.3 Comparison of robustness with existing works

To fully assess the performance of the presented method, we conducted two comparisons in this section. In the first comparison, we apply various types of attacks, whose parameters are listed in Table 2, to the seven selected standard color images of size 512 × 512. We then compute the minimum NCC values and the average PSNR values, as shown in Tables 8, 9, 10, 11, 12, and 13.

We summarize the obtained results in Table 14, and for readability they are also depicted in Fig. 7. In Fig. 7 and Table 14, we compare these values with the results of the zero-watermarking method of [22]. The PSNR values of the proposed algorithm and the method of [22] under various attacks are presented in Table 14 and Fig. 7a; they show that the proposed method yields higher image quality than the zero-watermarking approach of [22]. Moreover, Fig. 7b and Table 14 show that the proposed algorithm's NCC values under various attacks are very close to the ideal value of 1.0 and higher than the compared values in [22]. These results demonstrate that the proposed approach withstands various image attacks more effectively than the zero-watermarking approach of [22].

Table 14 PSNR averages and NCC minimums against various attacks
Fig. 7

Comparison between the proposed algorithm and the algorithm [22] under various attacks: (a) Represents the comparison for the average of PSNR values, and (b) Represents the comparison for the minimum values of NCC

In the second comparison, the robustness of the proposed algorithm is compared with the zero-watermarking algorithms of [4, 22, 45, 52, 61]. The results of the comparison experiments are shown in Fig. 8 and Table 15 for various attacks, including scaling (0.5, 2.0), rotation (3°, 5°, 10°), median filtering (3 × 3, 5 × 5), Gaussian filtering (3 × 3, 5 × 5), average filtering (3 × 3, 5 × 5), Gaussian noise (0.001, 0.005, 0.01), salt & pepper noise (0.01, 0.02), and JPEG compression (30, 50, 75). As the results in Fig. 8 and Table 15 show, the proposed algorithm performs better than the zero-watermarking algorithms of [4, 22, 45, 52, 61]: its BER values are very close to the optimal value of 0, demonstrating a clear increase in resilience against various attacks.

Fig. 8

BER value comparison between the proposed algorithm and existing algorithms [4, 22, 45, 52, 61] against different attacks

Table 15 BER value comparison between the proposed algorithm and existing algorithms [4, 22, 45, 52, 61] against different attacks

4.4 PSNR-based comparative analysis

This experiment examines the efficiency of the proposed algorithm for various host image and watermark sizes. Table 16 gives a comparative analysis based on PSNR. For the same attacks used in Table 15, this experiment employs different host image sizes together with different watermark sizes. The results show that when the watermark size is changed, the PSNR values of the attacked image remain nearly the same, implying that the quality of the original image is unaffected by the size of the watermark. This emphasizes that the suggested method protects intellectual property rights while maintaining image quality.

Table 16 Comparison of PSNR values for different host image sizes according to different watermark sizes

4.5 Comparative analysis for medical images

This section examines the robustness of the proposed technique against common image processing and geometric attacks on medical images. We selected from the 'Whole-Brain Atlas' [23] a medical color MRI image of size \(256 \times 256\), shown in Fig. 5(h). Table 17 reports the PSNR, BER, and NCC values of the proposed algorithm for the medical image, and Table 18 compares these values with those of the standard color images under different attacks. The results highlight the effectiveness and robustness of our proposed algorithm against various types of attacks and its capability to extract the watermark from medical images effectively.

Table 17 Robustness of medical image against various attack
Table 18 Comparative analysis between the standard color image and medical image for the proposed algorithm

5 Conclusion

This paper proposed a robust zero-watermarking algorithm for color images based on VGG19 and a 2D chaotic map. The novelty of the proposed algorithm includes:

  • A zero-watermark is constructed using the VGG19 deep CNN.

  • 2D-LACM is utilized to confuse and diffuse the original image feature matrix and watermark image.

  • The proposed zero-watermarking algorithm can be used to protect the copyright of color images, meeting the requirements of strong robustness against geometric and signal-processing attacks and excellent visual image quality.

We demonstrated experimentally that the proposed algorithm outperforms existing zero-watermarking algorithms. In the future, we will improve the proposed algorithm's security and robustness to extend its application to e-healthcare, telemedicine, stereoscopic, and real-time captured images.