1 Introduction

Digital image watermarking is the process of embedding a watermark into a digital image to form a watermarked image. The watermark in the watermarked image is classified as visible or invisible based on human visual perception. Visible watermarks are used to declare the ownership of digital images and provide copyright as well as copy protection in an active way, thereby discouraging unauthorised use of the original image. Conversely, invisible watermarks provide only copyright protection; they are unable to provide copy protection because they do not impede unauthorised access to copyrighted items. In general, there are three major requirements for visible watermarking [13].

  • Perceptibility: The watermark must be easily identified to provide copyright information.

  • Transparency: The watermark must not significantly obscure the image details it covers.

  • Robustness: The original image should not be easily recovered from the watermarked image by an illicit user [3, 4].

The task of embedding a visible watermark involves a trade-off between its perceptibility and transparency. A law-abiding customer must have error-free access to the image content, mandating that a visible watermark be reversible in nature. Several reversible watermarking schemes [5–7] have been proposed for specific image types such as medical, military, and satellite images. Mintzer et al. [1] were the first to propose a reversible visible watermark model in which the watermarked image can be viewed for free and the original image can be reconstructed from the watermarked image at an additional cost. Mohanty et al. [2] proposed a visible watermarking technique in the discrete cosine transform (DCT) domain. Their scheme modifies each DCT coefficient by designated scaling and embedding factors. A watermarked image produced by this scheme, Iw, can be defined by Iw = αC + βW, where C and W are the DCT coefficients of the original image and the watermark, respectively. The parameters α and β are determined by exploiting the texture sensitivity of the human visual system (HVS). Huang and Tang [8] proposed a visible watermarking scheme based on the discrete wavelet transform (DWT). The intensity of the watermark in different regions of the image varies depending on the underlying content of the image and human sensitivity to spatial frequency. However, these two schemes are not reversible. Hu et al. [9] proposed a removable visible watermarking scheme. The scheme uses a key to determine the unchanged coefficients, and the pixel-wise varying parameters are calculated from those unchanged coefficients. Thus, a user with the correct key at the receiver end can recalculate the parameters and remove the visible watermark from the watermarked image. However, the scheme cannot perfectly recover the original image because of rounding errors introduced by the wavelet transform. Another study [10] adopted removable visible watermarking schemes that allow authenticated users to automatically remove the embedded visible watermark. Nevertheless, these methods do not manage to recover the original image quality and result in visually impaired images. Impaired images are unsuitable for military, legal and medical applications in which high-quality image recovery is mandatory. The reversible visible watermarking schemes presented in Refs. [11–14] are able to recover the original image quality. However, the visible watermark embedded with the method proposed in Ref. [12] significantly reduces the quality of the watermarked image, while the methods presented in Refs. [13, 14] require a copy of the original watermark to recover the original image. The quality of both the watermarked image and the recovered watermark in Ref. [15] is also unsatisfactory. Liu and Tsai [18] proposed a generic lossless visible watermarking scheme based on the compound mapping of pixels as a benchmark for reversible visible watermarking; however, the original watermark is required to recover the original image, and resolution issues may arise because the recovery scheme estimates neighbouring pixels. Xuan et al. [17] proposed a distortionless data embedding technique based on the integer wavelet transform that achieves a good embedding capacity, but the robustness of the watermarked image is weak and its visual quality is not good enough.
The proposed work is also based on the pixel mapping technique, but the watermark is applied to alternate pixels, i.e., the watermark is embedded without mapping the four nearest neighbouring pixels; those neighbours are instead scaled by the average contrast-sensitivity function [16] to yield the watermarked image and to allow correct estimation of the image upon recovery. The proposed mechanism not only provides the flexibility to embed a large watermark (nearly equal in size to the original image) but also preserves visual quality, as demonstrated in a later section.

The remainder of this paper is organised as follows: Sect. 2 presents the mathematical model of the complex compound mapping and the proposed scheme, Sect. 3 presents the experimental results, and Sect. 4 summarises the concluding remarks.

2 Proposed scheme

In this section, the complex compound mapping is first described. The process is then demonstrated to be reversible, and finally the proposed algorithm based on the complex compound mapping is examined. Mapping can be defined as transforming one set of values into another by using some parameter. If different subsets of the values are mapped with different parameters, this type of mapping is called a complex compound mapping, as demonstrated in Fig. 1. Here, the input is an image, and the pixel values of the image are mapped with the parameters A, B and C. This nesting of compound maps can be repeated indefinitely to form a complex compound structure.

Fig. 1

Depiction of complex compound scheme

Under the experimental conditions, P = {p1, p2, …, pm} is a set of distinct values (the pixel values of the image) that are to be converted into another set of values defined as Y = {y1, y2, …, ym}. The respective mapping from pi to yi for all values of i = 1, 2, 3, …, m is completely reversible. The complex compound mapping, fx, is governed by the parameter x, such that x ∈ {A, B, C} and A ≠ B ≠ C. The whole process can be illustrated as follows:

Let Q be the intermediate set of values such that

$$ Q = f_{A} (P) $$
(1)

Leading to the circumstance that if \( f_{A} (P) = Q \) then \( f_{A}^{ - 1} \left( Q \right) = P \).

Now, the variable Q is defined as the intermediate set of values qi for i = 1, 2, 3, …, m, while Q1 and Q2 are two subsets of Q such that \( Q_{1} \cup Q_{2} = Q \).

The variables Y1 and Y2 also form two subsets of Y such that \( Y_{1} \cup Y_{2} = Y \). Additionally, Y1 and Y2 can be defined as:

$$ Y_{1} = f_{B} (Q_{1} )\;{\text{and}}\;Y_{2} = f_{C} (Q_{2} ) $$
(2)

The definition of these two variables leads to:

$$ Y = \{ f_{B} (Q_{1} )\} \cup \{ f_{C} (Q_{2} )\} $$
(3)
$$ Y = \{ f_{B} (f_{A} [P])\} \cup \{ f_{C} (f_{A} [P])\} $$
(4)

This mapping is reversible if we can derive the value of all P from Y using the following equation:

$$ P = \left\{ {f_{A}^{ - 1} \left[ {f_{B}^{ - 1} \left( {Y_{1} } \right)} \right]} \right\} \cup \left\{ {f_{A}^{ - 1} \left[ {f_{C}^{ - 1} \left( {Y_{2} } \right)} \right]} \right\} $$
(5)

Lemma 1

If \( Y = \{ f_{B} (f_{A} [P])\} \cup \{ f_{C} (f_{A} [P])\} \) for any complex compound mapping with function \( f_{x} \) and parameter x, then \( P = \left\{ {f_{A}^{ - 1} \left[ {f_{B}^{ - 1} \left( {Y_{1} } \right)} \right]} \right\} \cup \left\{ {f_{A}^{ - 1} \left[ {f_{C}^{ - 1} \left( {Y_{2} } \right)} \right]} \right\} \) for all values of P, Y, A, B and C.

Proof

Substituting the value of Y1 and Y2 from (2) in (5) results in the following:

$$ P = \left\{ {f_{A}^{ - 1} \left[ {f_{B}^{ - 1} \left( {f_{B} \left\langle {Q_{1} } \right\rangle } \right)} \right]} \right\} \cup \left\{ {f_{A}^{ - 1} \left[ {f_{C}^{ - 1} \left( {f_{C} \left\langle {Q_{2} } \right\rangle } \right)} \right]} \right\} $$
(6)

Substituting the values \( \left\{ {f_{A}^{ - 1} \left[ {f_{B}^{ - 1} \left( {f_{B} \left\langle {Q_{1} } \right\rangle } \right)} \right]} \right\} = f_{A}^{ - 1} \left( {Q_{1} } \right) \) and \( \left\{ {f_{A}^{ - 1} \left[ {f_{C}^{ - 1} \left( {f_{C} \left\langle {Q_{2} } \right\rangle } \right)} \right]} \right\} = f_{A}^{ - 1} \left( {Q_{2} } \right) \) in (6) yields

\( P = \left\{ {f_{A}^{ - 1} \left[ {Q_{1} } \right]} \right\} \cup \left\{ {f_{A}^{ - 1} \left[ {Q_{2} } \right]} \right\} \) which can also be written as

\( P = \left\{ {f_{A}^{ - 1} \left[ {Q_{1} \cup Q_{2} } \right]} \right\} \) and \( Q_{1} \cup Q_{2} = Q \), thus leading to

$$ P = \left\{ {f_{A}^{ - 1} \left[ Q \right]} \right\} $$
(7)

Substituting the value of Q from (1) into the RHS of (7) results in

$$ P = \left\{ {f_{A}^{ - 1} \left( {f_{A} \left[ P \right]} \right)} \right\} $$

where \( f_{A}^{ - 1} \) and \( f_{A} \) cancel and P is obtained, as required by Lemma 1.
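To make the reversibility property concrete, the following Python sketch (an illustration only, not part of the paper's implementation) applies an outer mapping fA followed by two inner mappings fB and fC to disjoint subsets of the values and then inverts the chain exactly; the affine forms chosen for fA, fB and fC are assumptions made purely for demonstration, since the scheme only requires each fx to be invertible.

```python
# Minimal sketch of a reversible complex compound mapping (illustrative only).
# The affine maps below are assumed forms; the scheme only requires that
# every f_x possess an exact inverse.

def f(p, a, b):
    """f_x(p) = a*p + b, parameterised by x = (a, b)."""
    return a * p + b

def f_inv(y, a, b):
    """Exact inverse of f_x."""
    return (y - b) / a

A, B, C = (2.0, 1.0), (3.0, -4.0), (0.5, 7.0)   # three distinct parameters

P = [10.0, 20.0, 30.0, 40.0]                    # original values (pixel values)
Q = [f(p, *A) for p in P]                       # Q = f_A(P), Eq. (1)

Q1, Q2 = Q[:2], Q[2:]                           # Q1 and Q2 partition Q
Y1 = [f(q, *B) for q in Q1]                     # Y1 = f_B(Q1), Eq. (2)
Y2 = [f(q, *C) for q in Q2]                     # Y2 = f_C(Q2), Eq. (2)

# Recovery, Eq. (5): P = f_A^-1[f_B^-1(Y1)] together with f_A^-1[f_C^-1(Y2)]
P_recovered = [f_inv(f_inv(y, *B), *A) for y in Y1] + \
              [f_inv(f_inv(y, *C), *A) for y in Y2]

assert P_recovered == P                         # the mapping is exactly reversible
```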

The process of embedding the visible watermark in this algorithm is based on the complex compound mapping. The method considers the HVS perception as well as the content of the image region upon which the visible watermark is embedded. The scaling factor, α, is derived by transforming the non-overlapping 8 × 8 blocks of the image into the DCT domain. A scaling factor αn is derived from the transform coefficients of the n-th 8 × 8 block according to:

$$ \alpha_{n} = \frac{1}{{\sqrt {2\pi \sigma^{2} } }}\exp \left[ {\frac{{ - [J_{n} (1,1) - \mu ]^{2} }}{{2\sigma^{2} }}} \right] + \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\nu }_{n} $$
(8)
$$ \alpha = \frac{1}{N}\sum\limits_{n = 1}^{N} {\alpha_{n} } $$
(9)

where N represents the total number of 8 × 8 blocks, \( J_{n}(1,1) \) represents the DC coefficient of the n-th block, and the other parameters are defined as follows:

$$ \mu = \frac{1}{N}\sum\limits_{n = 1}^{N} {J_{n} (1,1)} $$
(10)
$$ \sigma^{2} = \frac{1}{N}\sum\limits_{n = 1}^{N} {[J_{n} (1,1) - \mu ]}^{2} $$
(11)

The scaling factor \( \alpha \) is then obtained by averaging all the recorded values of \( \alpha_{n} \). More information about this method can be found in [11, 16]. In the proposed scheme, the scaling factor is the arithmetic mean (α) of all block scaling factors \( \alpha_{n} \), and the embedding factor is (1 − α). Instead of embedding the watermark in every pixel, the watermark is embedded in a single pixel without mapping its four nearest neighbouring pixels, as shown in Fig. 2. The intensities of the four nearest neighbouring pixels are reduced by the scaling factor (α) determined using the contrast-sensitivity function. This method effectively reduces the chance of incorrectly estimating the pixel values during the recovery process.
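To illustrate how the block-based scaling factor could be computed in practice, the Python sketch below derives the DC coefficient of each non-overlapping 8 × 8 DCT block and applies the Gaussian weighting of Eqs. (8)–(11). It is a simplified sketch under stated assumptions: the block-texture term of Eq. (8) is omitted, and scipy's DCT routine is used in place of whatever implementation the authors employed.

```python
import numpy as np
from scipy.fftpack import dct

def average_scaling_factor(img, block=8):
    """Average scaling factor alpha from the DC coefficients of 8x8 DCT blocks,
    following the Gaussian weighting of Eqs. (8)-(11).
    The block-texture term of Eq. (8) is omitted in this sketch."""
    h, w = img.shape
    dc = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            blk = img[r:r + block, c:c + block].astype(float)
            # 2-D DCT-II of the block; element [0, 0] is the DC coefficient J_n(1,1)
            J = dct(dct(blk, axis=0, norm='ortho'), axis=1, norm='ortho')
            dc.append(J[0, 0])
    dc = np.asarray(dc)
    mu, sigma2 = dc.mean(), dc.var()                               # Eqs. (10), (11)
    alpha_n = np.exp(-(dc - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    return alpha_n.mean()                                          # Eq. (9)

# Example usage (host_image is a hypothetical uint8 grayscale array):
# alpha = average_scaling_factor(host_image)
# beta = 1 - alpha      # embedding factor used by the proposed scheme
```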

Fig. 2

Depiction of the watermark embedding and the altering of the four nearest neighbouring pixel values

Our method is based on pixel mapping, which may lead to an overflow/underflow problem: the values of some pixels in the marked image may exceed the upper bound or fall below the lower bound. If this problem persists, it violates the lossless criterion. To avoid it, we use modulo-256 addition along with a location map. A location map is an m × n matrix whose entries may be 0, 1, or empty. A pixel mapping that would cause underflow or overflow during embedding is not used for embedding: the pixel value of the original image remains unchanged, and the corresponding location is marked with 0 in the location map. Conversely, a 1 is placed if the pixel mapping does not cause an overflow/underflow problem, and the corresponding pixel value of the original image is mapped to its new value. If neither condition occurs, the location-map entry remains empty. During extraction, if a 0 is found in the corresponding location-map entry, the pixel value is kept unchanged; if a 1 is encountered, the extraction algorithm is run. The algorithms for watermark embedding and extraction are summarised below.
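As a concrete illustration of this bookkeeping, the sketch below (an assumed, simplified realisation rather than the authors' code) checks a candidate pixel mapping for overflow or underflow and fills the location map with 1, 0 or an 'empty' marker (encoded here as −1) accordingly.

```python
import numpy as np

def build_location_map(host, mapped, candidate):
    """Apply a candidate pixel mapping only where it stays inside [0, 255].

    host      : uint8 host image
    mapped    : int array of the same shape holding the candidate mapped values
    candidate : boolean mask of pixels that are candidates for mapping
    Returns the marked image and the location map: 1 = mapping applied,
    0 = mapping skipped (overflow/underflow), -1 = 'empty' (not a candidate).
    """
    loc = np.full(host.shape, -1, dtype=np.int8)
    out = host.astype(np.int32).copy()
    bad = candidate & ((mapped > 255) | (mapped < 0))   # would over/underflow
    ok = candidate & ~bad
    loc[bad] = 0          # pixel kept unchanged, marked 0 in the location map
    loc[ok] = 1           # mapping applied, marked 1 in the location map
    out[ok] = mapped[ok]
    return out.astype(np.uint8), loc
```

During extraction, an entry of 1 tells the decoder to invert the mapping at that pixel, an entry of 0 tells it to keep the pixel as read, and empty entries are ignored.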

2.1 Watermark embedding algorithm

I[m,n] is the original image matrix and W[p,q] is the watermark image matrix. The watermark image W[p,q] is embedded within the original image I[m,n], so the size of the watermark image W must be less than or equal to the size of the original image I. Let the value of pixel Imn of I[m,n] be imn and the value of pixel Wpq of W[p,q] be wpq. L[row,col] is the location map, with the value of Lrow,col denoted lrow,col.
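Since only the inputs of the algorithm are specified above, the following Python sketch is a hedged reconstruction of one plausible embedding step: the watermark is blended into alternate pixels with the factors α and (1 − α), the four nearest neighbours of each marked pixel are only scaled by α, and the location map guards against overflow/underflow. The checkerboard pattern and the blending rule α·i + (1 − α)·w are assumptions made for illustration; the authors' exact mapping may differ.

```python
import numpy as np

def embed_visible_watermark(I, W, alpha):
    """Hedged sketch of the embedding step (not the authors' exact rule).

    I     : uint8 host image of size m x n
    W     : uint8 watermark, no larger than I
    alpha : average scaling factor from the contrast-sensitivity analysis
    Returns the watermarked image VW and the location map L.
    """
    VW = I.astype(float).copy()
    L = np.full(I.shape, -1, dtype=np.int8)          # -1 = 'empty' entry
    p, q = W.shape
    for r in range(p):
        for c in range(q):
            if (r + c) % 2 == 0:                     # alternate pixels carry the mark
                v = alpha * I[r, c] + (1 - alpha) * W[r, c]
                if 0 <= v <= 255:                    # the check matters for mapping
                    VW[r, c] = v                     # rules that can leave [0, 255]
                    L[r, c] = 1                      # mapping applied
                else:
                    L[r, c] = 0                      # skipped: would over/underflow
            else:                                    # four nearest neighbours: scaled only
                VW[r, c] = alpha * I[r, c]
    return np.round(VW).astype(np.uint8), L
```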

2.2 Watermark extraction algorithm

VWm×n is the watermarked image, and Sp×q is the area of the original image corresponding to Wp×q.
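Under the same assumptions as the embedding sketch above, recovery can proceed by first restoring the scaled neighbour pixels exactly and then estimating each marked pixel from its recovered neighbours; the following sketch is again illustrative rather than the authors' exact procedure.

```python
import numpy as np

def recover_original(VW, L, alpha, p, q):
    """Hedged sketch of recovery, the inverse of the embedding sketch above.

    VW    : uint8 watermarked image
    L     : location map produced during embedding
    alpha : the same average scaling factor used for embedding (nonzero)
    p, q  : size of the watermarked region S (corresponding to W)
    """
    R = VW.astype(float).copy()
    m, n = VW.shape
    # Step 1: restore the scaled (unmarked) neighbour pixels exactly.
    for r in range(p):
        for c in range(q):
            if (r + c) % 2 == 1:
                R[r, c] = VW[r, c] / alpha
    # Step 2: estimate each marked pixel from its four recovered neighbours.
    for r in range(p):
        for c in range(q):
            if L[r, c] == 1:
                nbrs = [R[rr, cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < m and 0 <= cc < n]
                R[r, c] = np.mean(nbrs)
            # L[r, c] == 0: the pixel was never altered, so it is kept as read.
    return np.clip(np.round(R), 0, 255).astype(np.uint8)
```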

The following flow charts explain the watermark embedding and extraction processes (see Figs. 3, 4).

Fig. 3

Flow graphs (a, b, c) demonstrating the watermark embedding process

Fig. 4

Flow graphs (a, b) demonstrating the watermark extraction process

Fig. 5

Performance comparison of embedded distortion on Lena image with different watermark coverage area in terms of PSNR (a), WSNR (b) and SSIM (c)

Fig. 6

Performance comparison of embedded distortion on F-16 image with different watermark coverage area in terms of PSNR (a), WSNR (b) and SSIM (c)

3 Results

The proposed watermarking scheme was implemented in MATLAB, tested on several standard grayscale images, and achieved satisfactory results. The results for the commonly used grayscale images "Lena" and "F-16" are provided here. Watermarks of different sizes were embedded to test the maximum PSNR achievable with respect to the watermark area and the embedded bits per pixel. To assess the visual quality of the visible watermarked image, we use the WSNR and the structural similarity (SSIM) index [19, 20]. The proposed work is then compared with the state-of-the-art reversible watermarking algorithms proposed by Yang et al. [11] and Liu and Tsai [18]. Comparative plots in Figs. 5 and 6 show that our scheme outperforms the other two schemes. The experimental results are tabulated in Table 1. From the data in the PSNR column of the table, we can see that the PSNR value gradually decreases as the payload increases. The minimum PSNR is obtained for F-16, which has larger smooth regions than the Lena image. Similarly, the WSNR and SSIM values of these test images are lowest for F-16. This suggests that visible noise distortion is greater in smooth regions, which can be further observed in the watermarked images (Fig. 7): the watermark is more visible in the F-16 image.
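For readers wishing to reproduce the quality figures, PSNR and SSIM between a host image and its watermarked version can be computed with standard tools; the snippet below uses scikit-image and is independent of the watermarking scheme itself. WSNR, which additionally weights the error spectrum by a contrast-sensitivity function, is not available in scikit-image and is therefore omitted here.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality(original, watermarked):
    """PSNR (dB) and SSIM between the original and the watermarked image."""
    psnr = peak_signal_noise_ratio(original, watermarked, data_range=255)
    ssim = structural_similarity(original, watermarked, data_range=255)
    return psnr, ssim

# Example usage: psnr, ssim = quality(host_image, watermarked_image)
```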

Table 1 PSNR values with varying watermark coverage area
Fig. 7

F-16 and Lena images with different sizes of embedded watermark

The SSIM values in the third column of Table 1 are near one for most images, which implies that our watermarking scheme preserves the visual quality of the host image well. As the size of the watermark increases, the SSIM value between the original image and the watermarked image decreases. This might be due to the increase in quantization error introduced by our method and may be considered a drawback of the proposed scheme. Compared with the other state-of-the-art methods, the proposed scheme preserves the visual quality of the original image well. In Tables 2, 3 and 4, the performance of the proposed scheme is compared with the state-of-the-art methods proposed in [14, 19] on the basis of image quality using WSNR and SSIM, where larger values indicate a higher-quality watermarked image. These comparison results suggest that the fidelity of the watermarked image produced by the proposed method is better than that of the others, and the higher PSNR values show better performance than the other state-of-the-art methods. Since Yang et al. embed a reconstruction packet within the watermarked image, the larger the watermark area, the smaller the area available for the reconstruction packet. The embedding performance is thus affected, leading to less visual distortion, which is reflected in the higher SSIM values for larger watermarks and can be verified in Table 3.

Table 2 Performance comparison with different state-of-the-art methods on the basis of PSNR at different watermark coverage areas
Table 3 Performance comparison with different state-of-the-art methods on the basis of WSNR at different watermark coverage areas
Table 4 Performance comparison with different state-of-the-art methods on the basis of SSIM at different watermark coverage areas

3.1 Security aspects

Our scheme requires the value of the scaling factor α and the location map to retrieve the original image. Thus, only a legitimate user who has the correct location map and the correct value of the scaling factor α can perform the extraction correctly. The location map can be embedded in encrypted form within the watermarked image, but this limits the size of the watermark that can be embedded in the original image.

4 Conclusions

In this study we presented a reversible visible watermarking technique based on pixel mapping. The proposed algorithm uses contrast-sensitivity functions for the estimation of pixel values to achieve the desired features of visible watermarking. As demonstrated by the experimental results, the proposed method better preserves the visual quality of the image. In contrast to previous reversible watermarking schemes, a watermark of size nearly equal to that of the host image can be embedded. The PSNR value between the original and the recovered image is better than that of other methods currently in use. Future work may be directed at making this method blind-recoverable and at finding more applications for the presented method, such as video watermarking.