
Embedding capacity estimation of reversible watermarking schemes


Abstract

Estimating the embedding capacity is an important problem, particularly in reversible multi-pass watermarking, where it must be analysed before an image can be watermarked. In this paper, we propose an efficient method for estimating the embedding capacity of a given cover image under multi-pass embedding, without actually embedding the watermark. We demonstrate this for a class of reversible watermarking schemes which operate on disjoint groups of pixels, specifically on pixel pairs. The proposed algorithm iteratively updates the co-occurrence matrix at every stage to estimate the multi-pass embedding capacity, and is much more efficient than actually performing the watermarking. We also suggest an extremely efficient, pre-computable tree-based implementation which is conceptually similar to the co-occurrence-based method but provides the estimates in a single iteration, with a complexity akin to that of single-pass capacity estimation. We also provide upper bounds on the embedding capacity. Finally, we evaluate the performance of our algorithms on recent watermarking schemes.
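
To make the setting concrete, the sketch below (illustrative only, and not the algorithm proposed in this paper) shows how a single-pass capacity estimate for a pixel-pair scheme can be read off a co-occurrence matrix of disjoint pixel pairs. The embeddability test is a hypothetical difference-expansion-style check, and all function names are assumptions introduced for the example; Python with NumPy is assumed.

import numpy as np

def pair_cooccurrence(image):
    """Co-occurrence (count) matrix of disjoint, horizontally adjacent pixel pairs
    of an 8-bit grayscale image: C[x, y] = number of pairs (x, y)."""
    img = np.asarray(image, dtype=np.int64)
    left, right = img[:, 0::2], img[:, 1::2]
    w = min(left.shape[1], right.shape[1])          # drop an unpaired last column, if any
    C = np.zeros((256, 256), dtype=np.int64)
    np.add.at(C, (left[:, :w].ravel(), right[:, :w].ravel()), 1)
    return C

def expandable(x, y):
    """Hypothetical difference-expansion-style test: the pair (x, y) can carry one
    bit only if both possible expanded pairs stay inside [0, 255]."""
    l, d = (x + y) // 2, x - y
    for bit in (0, 1):
        d2 = 2 * d + bit
        x2, y2 = l + (d2 + 1) // 2, l - d2 // 2
        if not (0 <= x2 <= 255 and 0 <= y2 <= 255):
            return False
    return True

def single_pass_capacity(C):
    """Single-pass estimate: one bit for every embeddable pair, read off the counts."""
    return int(sum(C[x, y] for x, y in zip(*np.nonzero(C)) if expandable(x, y)))

Multi-pass estimation, the subject of this paper, additionally requires tracking how these pair counts redistribute after every embedding pass; this is what the iterative co-occurrence update and the pixel-pair tree of Appendix A formalise.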

References

  • Alattar A 2004 Reversible watermark using the difference expansion of a generalized integer transform. IEEE Trans. Image Process. 13(8): 1147–1156

  • Borse R and Chaudhuri S 2010 Computation of embedding capacity in reversible watermarking schemes. Proceedings of the ACM’s ICVGIP-10, Chennai, India

  • Celik M and Sharma G 2005 Lossless generalized-lsb data embedding. IEEE Trans. Image Process. 14(2): 253–266

  • Coltuc D and Chassery J 2006 High capacity reversible watermarking. Proceedings of the IEEE International Conference on Image Procesing ICIP pp. 2565–2568

  • Coltuc D and Chassery J 2007 Very fast watermarking by reversible contrast mapping. IEEE Signal Process. Lett. 14(4): 255–258

  • Cover T, Thomas J, and MyiLibrary 1991 Elements of information theory, vol 6. Wiley Online Library

  • Cox I 2008 Digital watermarking and steganography. New York: Morgan Kaufmann, Second Edition

    Google Scholar 

  • Feng J, Lin I, Tsai C, and Chu Y 2006 Reversible watermarking:current status and key issues. Int. J. Network Security 12(3): 161–171

  • Golub G and VanLoan C 1996 Matrix computations 3rd edition. Baltimore: The John Hopkins University Press

    Google Scholar 

  • Haralick R, Shanmugam K, and Dinstein I 1973 Textural features for image classification. IEEE Trans. Syst. Man Cybern. 3(6): 610–621

  • Hong W and Chenb T S 2011 Reversible data embedding for high quality images using interpolation and reference pixel distribution mechanism. Elsevier J. Visual Commun. Image Representation 22(2): 131–140

  • Kalker T and Willems M 2002 Capacity bounds and constructions for reversible data-hiding. IEEE International Conference on DSP-2002

  • Kamastra L and Heijmans H 2005 Reversible data embedding into images using wavelet techniques and sorting. IEEE Trans. Image Process. 14(12): 2082–2090

  • Latif-Amet A, Ertuzun A, and Ercil A 2000 An efficient method for texture defect detection: sub-band domain co-occurrence matrices. Elsevier J. Image Vis. Comput. 18(6–7): 543–553

  • Leea C F, Chenb H L, and Laia S H 2010 An adaptive data hiding scheme with high embedding capacity and visual image quality based on smvq prediction through classification codebooks. Elsevier J. Image Vis. Comput. 28(8): 1293–1302

  • Li C 2005 Reversible watermarking scheme with image-independent embedding capacity. IEEE Trans. Vis. Image Signal Process. 152(6): 779–786

  • Michie D 1968 Memo functions and machine learning. Nature 218 (1): 19–22

  • Pissanetzky S 1984 Sparse matrix technology. Academic Press London

  • Roberto C, Francesco F, and Rudy B 2010 Reversible watermarking techniques: An overview and a classification. EURASIP Journal on Information Security

  • Sachnev V, Kim H, Nam J, Suresh S, and Shi Y 2009 Reversible watermarking algorithm using sorting and prediction. IEEE Trans. Circuits Syst. Video Technol. 19 (7): 989–999

  • Shannon C 1948 A mathematical theory of communication. Bell Syst. Tech. J. 27 (10): 623–656

  • Thodi D M and Rodriquez J 2007 Expansion embedding techniques for reversible watermarking. IEEE Trans. Image Process. 16 (3): 721–730

  • Tian J 2003 Reversible data embedding using a difference expansion. IEEE Trans. Circuits Syst. Video Technol. 13 (8): 890–896

  • Tseng H W and Chang C C 2008 An extended difference expansion algorithm for reversible watermarking. Elsevier J. Image Vis. Comput. 26 (8): 1148–1153

  • Vleeschouwer C, Delaigle J, and Macq B 2001 Circular interpretation of histogram for reversible watermarking. In: Proceedings of the IEEE 4th Workshop on Multimedia Signal Processing, pp. 345–350

  • Weng S, Zhao Y, Pan J, and Ni R 2008 Reversible watermarking based on invariability and adjustment on pixel pairs. IEEE Signal Process. Lett. 15: 721–724

Download references

Acknowledgement

We thank Subhasis Das, Siddhant Agrawal, Ronak Shah and Prof. Sibi-Raj Pillai from the Indian Institute of Technology Bombay for helpful hints and discussions. We also thank the reviewers for their suggestions to improve the quality of the presentation.

Author information

Corresponding author

Correspondence to Rushikesh Borse.

Appendix A

Theorem 3.

Under multi-pass embedding, the embedding capacity estimated by the co-occurrence-based algorithm (algorithm 1) is exactly the same as that estimated by the pixel-pair tree-based algorithm. In particular,

$$\begin{array}{@{}rcl@{}} &{\boldsymbol{1)}}& \sum\limits_{\xi \in \mathbb{D}} C_{k}(\xi) b_{\xi} = \sum\limits_{\xi \in \mathbb{D}} C_{0}(\xi) \sum\limits_{s \in \mathbf{S}_{\xi}} \left(~\prod\limits_{m=0}^{k} {p_{s[m]}} \right) {{b}_{s[k]}}\\ &{\boldsymbol{2)}}& \sum\limits_{k = 0}^{P-1} \sum\limits_{\xi \in \mathbb{D} } C_{k}(\xi) b_{\xi} = \sum\limits_{\xi \in \mathbb{D}} C_{0}(\xi) \sum\limits_{s \in \mathbf{S}_{\xi}} \left(~\prod\limits_{k=0}^{P-1} {p_{s[k]}} \right) \sum\limits_{k=0}^{P-1} {{b}_{s[k]}} \end{array} $$

Proof:

For every pixel pair ξ, let \(\mathbf{N}_{\xi}\) denote the set of nodes of the pixel-pair tree of ξ, and let \(n \in \mathbf{N}_{\xi}\) refer to a node, i.e., a pixel pair in the pixel-pair tree of ξ. Further, let \(s_{n}\) denote any path of the tree containing the node n, and let \(d_{n}\) denote the depth of node n; hence \(s_{n}[d_{n}] = n\). We first prove part 1) of the theorem. Starting with the L.H.S., it is evident from algorithm 1 that for any node \(n \in \mathbf{N}_{\xi}\) with depth \(d_{n} = k\), the number of pixel pairs contributed by the pair ξ to the pixel pair n is \(C_{0}(\xi ){\prod }_{m = 0}^{k} p_{s_{n}[m]}\); this is clear from the update step of algorithm 1. We then break the sum \({\sum }_{\xi \in \mathbb {D}} C_{k}(\xi ) b_{\xi }\) into the contributions of every node \(n \in \mathbf{N}_{\xi}\) with depth \(d_{n} = k\), for every pixel pair \(\xi \in \mathbb {D}\). In other words, we can write:

$$ \sum\limits_{\xi \in \mathbb{D}} C_{k}(\xi) b_{\xi} = \sum\limits_{\xi \in \mathbb{D}} \sum\limits_{n \in \mathbf{N}_{\xi} : d_{n} = k} C_{0}(\xi)\prod\limits_{m = 0}^{k} p_{s_{n}[m]} b_{s_{n}[k]}. $$
(42)

Now, it follows directly from the definition of \(\mathbf{S}_{\xi}\) that \( {\sum }_{n \in \mathbf {N}_{\xi } : d_{n} = k} {\prod }_{m = 0}^{k} p_{s_{n}[m]} b_{s_{n}[k]} = {\sum }_{s \in \mathbf {S}_{\xi }}{\prod }_{m=0}^{k} {p_{s[m]}} {b}_{s[k]}\). Using this in Eq. (42) proves the first part of the theorem.

We now consider the second part. Using Eq. (42), we can write:

$$\begin{array}{@{}rcl@{}} \sum\limits_{k = 0}^{P-1} \sum\limits_{\xi \in \mathbb{D}} C_{k}(\xi) b_{\xi} &=& \sum\limits_{\xi \in \mathbb{D}} \sum\limits_{k = 0}^{P-1} \sum\limits_{n \in \mathbf{N}_{\xi} : d_{n} = k} C_{0}(\xi)\prod\limits_{m = 0}^{k} p_{s_{n}[m]} b_{s_{n}[k]}\\ &=& \sum\limits_{\xi \in \mathbb{D}} \sum\limits_{n \in \mathbf{N}_{\xi}} C_{0}(\xi)\prod\limits_{k = 0}^{d_{n}} p_{s_{n}[k]} b_{s_{n}[d_{n}]}. \end{array} $$

In the above expression, observe that as k goes from 0 to P−1, we sum over every node of the pixel-pair tree. Hence we rewrite the expression by summing explicitly over every node of the tree, and finally rename the product variable m as k, which gives the second line above. In order to prove the second part of the theorem, it is therefore sufficient to show that for every pixel pair ξ:

$$ \sum\limits_{n \in \mathbf{N}_{\xi}} C_{0}(\xi)\prod\limits_{k = 0}^{d_{n}}p_{s_{n}[k]} b_{n} = \sum\limits_{s \in \mathbf{S}_{\xi}} C_{0}(\xi) \prod\limits_{k = 0}^{P-1} p_{s[k]} \sum\limits_{k = 0}^{P-1} b_{s[k]}. $$

Note that we have used the fact that \( b_{s_{n}[d_{n}]} = b_{n}\). We now start with the R.H.S.:

$$\begin{array}{@{}rcl@{}} ~\text{R.H.S} &=& \sum\limits_{s \in \mathbf{S}_{\xi}} C_{0}(\xi) \prod\limits_{k = 0}^{P-1} p_{s[k]} \sum\limits_{k = 0}^{P-1} b_{s[k]} \\ &=& \sum\limits_{n \in \mathbf{N}_{\xi}} \sum\limits_{s \in \mathbf{S}_{\xi}} C_{0}(\xi) \prod\limits_{k = 0}^{P-1} p_{s[k]} \sum\limits_{k = 0}^{P-1} b_{s[k]} I(s[k] = n) \\ &=& \sum\limits_{n \in \mathbf{N}_{\xi}} \sum\limits_{s \in \mathbf{S}_{\xi}} C_{0}(\xi) \prod\limits_{k = 0}^{P-1} p_{s[k]}\, b_{n}\, I(n \in s). \end{array} $$

In the second step above, every term \(b_{s[k]}\) is attributed to the node \(n = s[k]\); in the third step we use \(b_{s[k]} I(s[k] = n) = b_{n} I(s[k] = n)\) and the fact that a node occurs at most once along a path, so that the inner sum reduces to \(b_{n} I(n \in s)\). Thus only those paths which pass through the node n contribute. Let \(\mathbf{S}_{n} \subseteq \mathbf{S}_{\xi}\) denote the subset of paths which pass through node n. Hence we can write:

$$\begin{array}{@{}rcl@{}} ~\text{R.H.S} &=& \sum\limits_{n \in \mathbf{N}_{\xi}} \sum\limits_{s \in \mathbf{S}_{n}} C_{0}(\xi) \prod\limits_{k = 0}^{d_{n}} p_{s[k]} \prod\limits_{k = d_{n} + 1}^{P-1} p_{s[k]}\, b_{n} \\ &=& \sum\limits_{n \in \mathbf{N}_{\xi}} C_{0}(\xi) \prod\limits_{k = 0}^{d_{n}} p_{s_{n}[k]}\, b_{n} \sum\limits_{s \in \mathbf{S}_{n}} \prod\limits_{k = d_{n} + 1}^{P-1} p_{s[k]} \\ &=& \sum\limits_{n \in \mathbf{N}_{\xi}} C_{0}(\xi) \prod\limits_{k = 0}^{d_{n}} p_{s_{n}[k]}\, b_{n} \\ &=& ~\text{L.H.S.} \end{array} $$

Observe that the nodes in the paths \(s \in \mathbf{S}_{n}\) with depth greater than \(d_{n}\) form a complete subtree, and hence \({\sum }_{s \in \mathbf {S}_{n}} {\prod }_{k = d_{n} + 1}^{P-1} p_{s[k]} = 1\). Further, observe that the nodes with depth \(\le d_{n}\) occur in every path \(s \in \mathbf{S}_{n}\); hence we may replace these nodes by \(s_{n}[k]\), since \(s_{n}\) is itself a path in \(\mathbf{S}_{n}\). Hence proved. Note that although this proof establishes the equivalence of the tree-based and co-occurrence-based algorithms for the estimates of \(\omega (\mathcal {B})\) and \(\omega (\mathcal {B}_{k})\), the same argument proves the equivalence of the estimates of \(\eta (\mathcal {B})\) and \(\eta (\mathcal {B}_{k})\).
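
The equivalence just proved can be checked numerically on a toy model. The sketch below (a minimal illustration, not the paper's implementation) assumes a small set of pair states, a stochastic transition matrix T whose rows play the role of the child probabilities p, bit values b, initial counts C_0, and a root taken with probability 1; all of these names and values are assumptions for the example. Under these assumptions it computes the total multi-pass capacity once by iterative count redistribution (in the spirit of algorithm 1) and once by enumerating root-to-leaf paths, and checks part 2) of the theorem.

import itertools
import numpy as np

rng = np.random.default_rng(7)

# Toy model (all sizes, names and values are illustrative assumptions):
# S pair states, P embedding passes.
S, P = 4, 3
b  = np.array([1.0, 0.0, 1.0, 1.0])          # b[xi]: bits a pair in state xi can hold
C0 = np.array([40.0, 25.0, 10.0, 5.0])       # C_0(xi): initial pair counts
T  = rng.random((S, S))
T /= T.sum(axis=1, keepdims=True)            # T[xi, eta]: prob. a pair in state xi becomes eta after one pass

# Co-occurrence style estimate (in the spirit of algorithm 1): at every pass,
# add the capacity of the current counts, then redistribute the counts.
C = C0.copy()
cap_iterative = 0.0
for k in range(P):
    cap_iterative += float(C @ b)            # sum_xi C_k(xi) b_xi
    C = T.T @ C                              # C_{k+1}(eta) = sum_xi C_k(xi) T[xi, eta]

# Tree style estimate: enumerate every root-to-leaf path s of length P, weight it
# by the product of its transition probabilities (the root is taken with
# probability 1), and accumulate the bits of every node on the path.
cap_tree = 0.0
for xi in range(S):
    for tail in itertools.product(range(S), repeat=P - 1):
        s = (xi,) + tail
        prob = np.prod([T[s[m - 1], s[m]] for m in range(1, P)])
        cap_tree += C0[xi] * prob * sum(b[s[k]] for k in range(P))

assert np.isclose(cap_iterative, cap_tree)   # Theorem 3, part 2
print(cap_iterative, cap_tree)

Both sides evaluate the same polynomial in the transition probabilities, so the assertion holds up to floating-point rounding.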

Lemma 1.

For any function \(f: \mathbb {S} \to [0.5,1]\) , \(\underset {\mathbf {s} \in \mathbb {S}}{\min } H_{0}(f(\mathbf {s})) \equiv H_{0}(\underset {\mathbf {s} \in \mathbb {S}}{\max } f(\mathbf {s}))\) . Similarly for any function \(g: \mathbb {S} \to [0,0.5]\) , \(\underset {\mathbf {s} \in \mathbb {S}}{\min } H_{0}(g(\mathbf {s})) \equiv H_{0}(\underset {\mathbf {s} \in \mathbb {S}}{\min } g(\mathbf {s}))\).

Proof:

First, note that \(H_{0}(z)\) is a decreasing function for \(0.5 \le z \le 1\). Hence \(H_{0}(z_{1}) \le H_{0}(z_{2})\) implies \(z_{1} \ge z_{2}\). Using this fact, we prove the lemma by contradiction. Assume that \(\mathbf {s}_{1} = \underset {\mathbf {s} \in \mathbb {S}}{~\text { arg min }} H_{0}(f(\mathbf {s}))\) and \(\mathbf {s}_{2} = \underset {\mathbf {s} \in \mathbb {S}}{~\text { arg max }} f(\mathbf {s})\). Further, assume that \(f(\mathbf{s}_{1}) \neq f(\mathbf{s}_{2})\). Then, by the definition of \(\mathbf{s}_{2}\), \(f(\mathbf{s}_{1}) < f(\mathbf{s}_{2})\), while by the definition of \(\mathbf{s}_{1}\), \(H_{0}(f(\mathbf{s}_{1})) \le H_{0}(f(\mathbf{s}_{2}))\). This is a contradiction, since \(H_{0}(z)\) is decreasing on \([0.5,1]\). Thus \(f(\mathbf {s}_{1}) \equiv f(\mathbf {s}_{2}) \Rightarrow \underset {\mathbf {s} \in \mathbb {S}}{\min } H_{0}(f(\mathbf {s})) \equiv H_{0}(\underset {\mathbf {s} \in \mathbb {S}}{\max } f(\mathbf {s}))\). The second part is proved similarly.
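
As a concrete check of the monotonicity argument (assuming, as is usual in capacity expressions, that \(H_{0}\) denotes the binary entropy function \(H_{0}(z) = -z\log_{2} z - (1-z)\log_{2}(1-z)\)):

$$ H_{0}(0.6) \approx 0.971 > H_{0}(0.9) \approx 0.469, $$

so on [0.5, 1] a larger argument yields a smaller value of \(H_{0}\), which is why minimising \(H_{0}(f(\mathbf{s}))\) amounts to maximising \(f(\mathbf{s})\).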

Lemma 2.

For any function \(h: \mathbb {S} \to [0,1]\) , \(\underset {\mathbf {s} \in \mathbb {S}}{\min } H_{0}(h(\mathbf {s})) \ge \min (H_{0}(\underset {\mathbf {s} \in \mathbb {S}}{\max } h(\mathbf {s})), H_{0}(\underset {\mathbf {s} \in \mathbb {S}}{\min } h(\mathbf {s})))\).

Proof:

This lemma follows directly from Lemma 1. If \(h(\mathbf{s}) \le 0.5\) for all \(\mathbf{s}\), then using Lemma 1 we have \(\underset {\mathbf {s} \in \mathbb {S}}\min H_{0}(h(\mathbf {s})) \equiv H_{0}(\underset {\mathbf {s} \in \mathbb {S}}\min h(\mathbf {s}))\). Similarly, if \(h(\mathbf{s}) \ge 0.5\) for all \(\mathbf{s}\), then \(\underset {\mathbf {s} \in \mathbb {S}}\min H_{0}(h(\mathbf {s})) \equiv H_{0}(\underset {\mathbf {s} \in \mathbb {S}}\max h(\mathbf {s}))\). In general, splitting \(\mathbb{S}\) according to whether \(h(\mathbf{s})\) lies below or above 0.5 and applying Lemma 1 to each part, \(\underset {\mathbf {s} \in \mathbb {S}}\min H_{0}(h(\mathbf {s}))\) is at least the minimum of these two quantities, which proves the lemma.
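
A small worked instance of the bound in Lemma 2 (again under the assumption that \(H_{0}\) is the binary entropy function): if h takes only the two values 0.3 and 0.8, then

$$ \underset{\mathbf{s} \in \mathbb{S}}{\min}\, H_{0}(h(\mathbf{s})) = \min\left(H_{0}(0.3), H_{0}(0.8)\right) \approx \min(0.881, 0.722) = 0.722, $$

while \(\min \left (H_{0}(\underset {\mathbf {s} \in \mathbb {S}}{\max } h(\mathbf {s})), H_{0}(\underset {\mathbf {s} \in \mathbb {S}}{\min } h(\mathbf {s}))\right ) = \min(H_{0}(0.8), H_{0}(0.3)) \approx 0.722\), so the bound holds, here with equality.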

About this article

Cite this article

IYER, R., BORSE, R. & CHAUDHURI, S. Embedding capacity estimation of reversible watermarking schemes. Sadhana 39, 1357–1385 (2014). https://doi.org/10.1007/s12046-014-0288-8
