
Spatial Image Steganography Incorporating Adjacent Dependencies

  • Conference paper
Artificial Intelligence and Security (ICAIS 2022)

Abstract

Most existing model-based image steganographic schemes in the spatial domain assume independence among adjacent pixels and thus ignore the embedding interactions among neighbouring pixels. In this paper, we propose a new image steganographic scheme that takes advantage of the correlations between each pixel and its eight-neighbourhood and determines the embedding cost of each pixel by minimizing the KL-divergence between the cover and stego objects. Experimental results demonstrate that our scheme shows superior or comparable performance to two state-of-the-art algorithms, HILL and MiPOD, in resisting steganalysis detectors.



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants U1726315, 61772573, U1936212, and U19B2022, in part by the Key-Area Research and Development Program of Guangdong Province under Grant 2019B010139003, and in part by the Shenzhen R&D Program under Grant GJHZ20180928155814437.

Corresponding author

Correspondence to Jiangqun Ni.


Appendices


A Decomposition of \(p(0 \sim 8)\)

Figure 1 illustrates the 9 pixels of one block with indices \(\{ 0 \sim 8\}\), where two pixels are adjacent if connected and independent if disconnected. By conditional independence, for two independent pixels i, j we can easily obtain:

$$\begin{aligned} p(i,j) = p(i|j) \cdot p(j) = p(i) \cdot p(j), \end{aligned}$$
(15)

Let \(i \leftrightarrow j\) denote adjacent pixels i, j. We proceed with a group of pixels according to conditional independence:

$$\begin{aligned} \begin{aligned} p(i,{j_1}, \ldots {j_m})&= p(i|{j_1}, \ldots {j_m}) \cdot p({j_1}, \ldots {j_m})\\&= p(i|{j_k},i \leftrightarrow {j_k}) \cdot p({j_1}, \ldots {j_m}). \end{aligned} \end{aligned}$$
(16)

Then (17) follows immediately; we repeatedly expand the second factor of each preceding formula and use (16) to simplify it:

$$\begin{aligned} p(0 \sim 8) = p(5|0 \sim 4,6 \sim 8) \cdot p(0 \sim 4,6 \sim 8) = p(5|0,1,2) \cdot p(0 \sim 4,6 \sim 8), \end{aligned}$$
(17)
$$\begin{aligned} p(0 \sim 4,6 \sim 8) = p(6|0 \sim 4,7,8) \cdot p(0 \sim 4,7,8) = p(6|0,2,3) \cdot p(0 \sim 4,7,8), \end{aligned}$$
(18)
$$\begin{aligned} p(0 \sim 4,7,8) = p(8|0 \sim 4,7) \cdot p(0 \sim 4,7) = p(8|0,1,4) \cdot p(0 \sim 4,7), \end{aligned}$$
(19)
$$\begin{aligned} p(0 \sim 4,7) = p(7|0 \sim 4) \cdot p(0 \sim 4) = p(7|0,3,4) \cdot p(0 \sim 4), \end{aligned}$$
(20)
$$\begin{aligned} \begin{aligned} p(0 \sim 4)&= p(4|0,1,2,3) \cdot p(0,1,2,3) = p(4|0,1,3) \cdot p(0,1,2,3) \\&= \frac{{p(0,1,3,4)}}{{p(0,1,3)}} \cdot p(0,1,2,3), \end{aligned} \end{aligned}$$
(21)
$$\begin{aligned} p(0,1,2,3) = p(1|0,2,3) \cdot p(0,2,3) = p(1|0,2) \cdot p(0,2,3) = \frac{{p(0,1,2)}}{{p(0,2)}} \cdot p(0,2,3), \end{aligned}$$
(22)

For the numerator and denominator of the first factor in (21):

$$\begin{aligned} p(0,1,3,4) = p(3|0,1,4) \cdot p(0,1,4) = p(3|0,4) \cdot p(0,1,4) = \frac{{p(0,3,4)}}{{p(0,4)}} \cdot p(0,1,4), \end{aligned}$$
(23)
$$\begin{aligned} p(0,1,3) = p(1|0,3) \cdot p(0,3) = p(1|0) \cdot p(0,3) = \frac{{p(0,1)}}{{p(0)}} \cdot p(0,3), \end{aligned}$$
(24)

Substituting (22), (23) and (24) into (21), (21) into (20), (20) into (19), (19) into (18), and (18) into (17) in turn, (17) can be expressed as:

$$\begin{aligned} p(0 \sim 8) = \frac{{p(1,0,2,5) \cdot p(3,0,2,6) \cdot p(3,0,4,7) \cdot p(1,0,4,8) \cdot p(0)}}{{p(0,1) \cdot p(0,2) \cdot p(0,3) \cdot p(0,4)}}. \end{aligned}$$
(25)
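The substitution chain above is pure cancellation of marginal factors. As a sanity check, the bookkeeping can be replayed with a short script (a sketch; the string keys naming the marginals, e.g. "0125" for \(p(0,1,2,5)\), are ad hoc):

```python
from collections import Counter

# Track each marginal p(...) as a string key with an integer exponent:
# positive exponents are numerator factors, negative are denominator factors.
c = Counter()

# (17)-(20): each corner conditional p(x | neighbours) = joint / marginal.
for joint, marg in [("0125", "012"), ("0236", "023"),
                    ("0148", "014"), ("0347", "034")]:
    c[joint] += 1
    c[marg] -= 1

# (23): p(0,1,3,4) = p(0,3,4) * p(0,1,4) / p(0,4)
c["034"] += 1; c["014"] += 1; c["04"] -= 1
# (24): 1 / p(0,1,3) = p(0) / (p(0,1) * p(0,3))
c["0"] += 1; c["01"] -= 1; c["03"] -= 1
# (22): p(0,1,2,3) = p(0,1,2) * p(0,2,3) / p(0,2)
c["012"] += 1; c["023"] += 1; c["02"] -= 1

# After cancellation, only the factors of (25) should survive.
result = Counter({k: v for k, v in c.items() if v != 0})
expected = Counter({"0125": 1, "0236": 1, "0347": 1, "0148": 1, "0": 1,
                    "01": -1, "02": -1, "03": -1, "04": -1})
print(result == expected)  # → True
```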

B 2D and 4D Steganographic Fisher Information Matrices

We mimic Eq. (3), where the one-dimensional steganographic Fisher Information was introduced, to construct the 2D and 4D steganographic Fisher Information matrices in this proof. First, the elements of the 2D steganographic Fisher Information matrix are approximated as:

$$\begin{aligned} \begin{aligned} I_{ij}^{(12)}(0)&= I_{ij}^{(21)}(0) = {\sum _{m,n}}\frac{1}{{{p_{m,n}}}}(\frac{{\partial {q_{m,n}}({\beta _i},{\beta _j})}}{{\partial {\beta _i}}}{|_{{\beta _i},{\beta _j} = 0}}) \cdot (\frac{{\partial {q_{m,n}}({\beta _i},{\beta _j})}}{{\partial {\beta _j}}}{|_{{\beta _i},{\beta _j} = 0}}), \\&I_{ij}^{(11)}(0) = {\sum _{m,n}}\frac{1}{{{p_{m,n}}}}{(\frac{{\partial {q_{m,n}}({\beta _i},{\beta _j})}}{{\partial {\beta _i}}}{|_{{\beta _i},{\beta _j} = 0}})^2}, \\&I_{ij}^{(22)}(0) = {\sum _{m,n}}\frac{1}{{{p_{m,n}}}}{(\frac{{\partial {q_{m,n}}({\beta _i},{\beta _j})}}{{\partial {\beta _j}}}{|_{{\beta _i},{\beta _j} = 0}})^2}. \end{aligned} \end{aligned}$$
(26)

\({p_{m,n}}\), short for \( p_{m,n}^{i,j}\), is the probability \(P\{ {i} = m,{j} = n\}\), which can be viewed as the value \(f({x_i = m},{x_j = n})\) of a two-dimensional Gaussian distribution. Although \({x_i},{x_j}\) are both discrete integer variables, we treat \(f({x_i},{x_j})\) as continuous so that it can be differentiated:

$$\begin{aligned} f({x_i},{x_j}) = \frac{1}{{2\pi {\sigma _i}{\sigma _j}\sqrt{1 - \rho _{ij}^2} }}\exp ( - \frac{1}{{2(1 - \rho _{ij}^2)}}(\frac{{x_i^2}}{{\sigma _i^2}} - \frac{{2{\rho _{ij}}{x_i}{x_j}}}{{{\sigma _i}{\sigma _j}}} + \frac{{x_j^2}}{{\sigma _j^2}})). \end{aligned}$$
(27)
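As a quick numerical check of (27) (a sketch; the parameter values \(\sigma_i = 1.0\), \(\sigma_j = 1.2\), \(\rho_{ij} = 0.4\) are arbitrary examples), the density should integrate to one and reproduce the assumed second moments:

```python
import numpy as np

si, sj, rho = 1.0, 1.2, 0.4            # example parameters (arbitrary)
d = 0.02                               # Riemann-sum grid spacing
x = np.arange(-8.0, 8.0, d)
X, Y = np.meshgrid(x, x, indexing="ij")

# Bivariate Gaussian density of Eq. (27).
coef = 1.0 / (2 * np.pi * si * sj * np.sqrt(1 - rho**2))
Q = (X**2/si**2 - 2*rho*X*Y/(si*sj) + Y**2/sj**2) / (2 * (1 - rho**2))
f = coef * np.exp(-Q)

total = f.sum() * d * d                # ≈ 1
ex2 = (X**2 * f).sum() * d * d         # ≈ si**2
exy = (X * Y * f).sum() * d * d        # ≈ rho * si * sj
```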

Given \({p_{m,n}} = f(m,n),{p_{m \pm 1,n}} = f(m \pm 1,n),{p_{m,n \pm 1}} = f(m,n \pm 1)\), the second-order Taylor expansions of f about (m, n), evaluated at the points \((m \pm 1,n),(m,n \pm 1)\), give:

$$\begin{aligned} f(m \pm 1,n) \approx f(m,n) \pm \frac{{\partial f({x_i},{x_j})}}{{\partial {x_i}}}{|_{{x_i} = m,{x_j} = n}} + \frac{1}{2} \cdot \frac{{{\partial ^2}f({x_i},{x_j})}}{{\partial {x_i}^2}}{|_{{x_i} = m,{x_j} = n}}, \end{aligned}$$
(28)
$$\begin{aligned} f(m,n \pm 1) \approx f(m,n) \pm \frac{{\partial f({x_i},{x_j})}}{{\partial {x_j}}}{|_{{x_i} = m,{x_j} = n}} + \frac{1}{2} \cdot \frac{{{\partial ^2}f({x_i},{x_j})}}{{\partial {x_j}^2}}{|_{{x_i} = m,{x_j} = n}}. \end{aligned}$$
(29)
Summing each pair of expansions in (28) and (29) cancels the first-order terms, so that:

$$\begin{aligned} - 2{p_{m,n}} + {p_{m - 1,n}} + {p_{m + 1,n}} \approx \frac{{{\partial ^2}f({x_i},{x_j})}}{{\partial {x_i}^2}}{|_{{x_i} = m,{x_j} = n}}, \end{aligned}$$
(30)
$$\begin{aligned} - 2{p_{m,n}} + {p_{m,n - 1}} + {p_{m,n + 1}} \approx \frac{{{\partial ^2}f({x_i},{x_j})}}{{\partial {x_j}^2}}{|_{{x_i} = m,{x_j} = n}}. \end{aligned}$$
(31)
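The second-difference approximations (30)–(31) can be checked directly against the analytic second derivative of (27) (a sketch; the evaluation point and parameters are arbitrary choices, and a small step h stands in for the unit pixel step):

```python
import math

SI, SJ, RHO = 1.0, 1.2, 0.4   # example parameters (arbitrary)

def f(x, y):
    # Bivariate Gaussian density of Eq. (27).
    c = 1.0 / (2 * math.pi * SI * SJ * math.sqrt(1 - RHO**2))
    q = (x*x/SI**2 - 2*RHO*x*y/(SI*SJ) + y*y/SJ**2) / (2 * (1 - RHO**2))
    return c * math.exp(-q)

def fxx(x, y):
    # Closed-form second partial derivative d^2 f / dx^2:
    # f = c*exp(-q) gives f_xx = (q_x^2 - q_xx) * f.
    qx = (x/SI**2 - RHO*y/(SI*SJ)) / (1 - RHO**2)
    qxx = 1.0 / (SI**2 * (1 - RHO**2))
    return f(x, y) * (qx*qx - qxx)

m, n, h = 0.7, -0.3, 1e-4
# Central second difference in x, as on the left-hand side of (30).
approx = (f(m + h, n) - 2*f(m, n) + f(m - h, n)) / h**2
print(abs(approx - fxx(m, n)) < 1e-6)  # → True
```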

After substituting (5), (27), (30) and (31) into (26) and treating the variables m, n, s, t as continuous, we can finally derive (32)–(34). We give an example formula for the element ij of the 4\(\,\times \,\)4 Fisher Information matrix in (35), which is easily generalized to the other elements; their closed-form results are not listed due to length limits.

$$\begin{aligned} \begin{aligned} I_{ij}^{(11)}(0)&= \sum \limits _{m,n} {\frac{1}{{{p_{m,n}}}}{{( - 2{p_{m,n}} + {p_{m - 1,n}} + {p_{m + 1,n}})}^2}} \\&\approx \int \limits _m {\int \limits _n {\frac{1}{{f\mathrm{{(}}{x_i},{x_j}\mathrm{{)}}}}{{\mathrm{{(}}\frac{{{\partial ^2}f({x_i},{x_j})}}{{\partial {x_i}^2}}{|_{{x_i} = m,{x_j} = n}}\mathrm{{)}}}^\mathrm{{2}}}} } dmdn = \frac{2}{{\sigma _i^4{{(1 - \rho _{ij}^2)}^2}}}. \end{aligned} \end{aligned}$$
(32)
$$\begin{aligned} \begin{aligned} I_{ij}^{(22)}(0)&= \sum \limits _{m,n} {\frac{1}{{{p_{m,n}}}}{{( - 2{p_{m,n}} + {p_{m,n - 1}} + {p_{m,n + 1}})}^2}} \\&\approx \int \limits _m {\int \limits _n {\frac{1}{{f\mathrm{{(}}{x_i},{x_j}\mathrm{{)}}}}{{\mathrm{{(}}\frac{{{\partial ^2}f({x_i},{x_j})}}{{\partial {x_j}^2}}{|_{{x_i} = m,{x_j} = n}}\mathrm{{)}}}^\mathrm{{2}}}} } dmdn = \frac{2}{{\sigma _j^4{{(1 - \rho _{ij}^2)}^2}}}. \end{aligned} \end{aligned}$$
(33)
$$\begin{aligned} \begin{aligned} I_{ij}^{(12)}(0)&= I_{ij}^{(21)}(0) \\&= \sum \limits _{m,n} {\frac{1}{{{p_{m,n}}}}( - 2{p_{m,n}} + {p_{m - 1,n}} + {p_{m + 1,n}}) \cdot ( - 2{p_{m,n}} + {p_{m,n - 1}} + {p_{m,n + 1}})} \\&\approx \int \limits _m {\int \limits _n {\frac{1}{{f\mathrm{{(}}{x_i},{x_j}\mathrm{{)}}}}} } \mathrm{{(}}\frac{{{\partial ^2}f({x_i},{x_j})}}{{\partial {x_i}^2}}{|_{{x_i} = m,{x_j} = n}}\mathrm{{)}} \cdot \mathrm{{(}}\frac{{{\partial ^2}f({x_i},{x_j})}}{{\partial {x_j}^2}}{|_{{x_i} = m,{x_j} = n}}\mathrm{{)}}dmdn \\&= \frac{{2\rho _{ij}^2}}{{\sigma _i^2\sigma _j^2 \cdot {{(1 - \rho _{ij}^2)}^2}}}. \end{aligned} \end{aligned}$$
(34)
$$\begin{aligned} \begin{aligned} I_{ijkl}^{ij}(0)&= \sum \limits _{m,n,s,t} \frac{1}{{{p_{m,n,s,t}}}} \cdot (\frac{{\partial {q_{m,n,s,t}}}}{{\partial {\beta _i}}}{|_{{\beta _i},{\beta _j},{\beta _k},{\beta _l} = 0}}) \cdot (\frac{{\partial {q_{m,n,s,t}}}}{{\partial {\beta _j}}}{|_{{\beta _i},{\beta _j},{\beta _k},{\beta _l} = 0}})\\&= \sum \limits _{m,n,s,t} \frac{1}{{{p_{m,n,s,t}}}} \cdot ( - 2{p_{m,n,s,t}} + {p_{m - 1,n,s,t}} + {p_{m + 1,n,s,t}}) \\&\cdot ( - 2{p_{m,n,s,t}} + {p_{m,n - 1,s,t}} + {p_{m,n + 1,s,t}}) \\&\approx \int \limits _m {\int \limits _n {\int \limits _s {\int \limits _t {\frac{1}{{f({x_i},{x_j},{x_k},{x_l})}}(\frac{{{\partial ^2}f({x_i},{x_j},{x_k},{x_l})}}{{\partial {x_i}^2}}{|_{{x_i} = m,{x_j} = n,{x_k} = s,{x_l} = t}})} } } } \\&{{\cdot ( \frac{{{\partial ^2}f({x_i},{x_j},{x_k},{x_l})}}{{\partial {x_j}^2}}{|_{{x_i} = m,{x_j} = n,{x_k} = s,{x_l} = t}})dsdtdmdn}}. \end{aligned} \end{aligned}$$
(35)
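As a numerical sanity check of the continuous-limit integral in (32) (a sketch; the parameters, grid limits and step sizes are arbitrary choices), the double integral of \((\partial^2 f/\partial x_i^2)^2/f\) should approach \(2/(\sigma_i^4(1-\rho_{ij}^2)^2)\):

```python
import numpy as np

si, sj, rho = 1.0, 1.2, 0.4            # example parameters (arbitrary)
h, d = 1e-3, 0.02                      # difference step and integration grid spacing

def f(X, Y):
    # Bivariate Gaussian density of Eq. (27).
    c = 1.0 / (2 * np.pi * si * sj * np.sqrt(1 - rho**2))
    q = (X**2/si**2 - 2*rho*X*Y/(si*sj) + Y**2/sj**2) / (2 * (1 - rho**2))
    return c * np.exp(-q)

x = np.arange(-8.0, 8.0, d)
X, Y = np.meshgrid(x, x, indexing="ij")
F = f(X, Y)
Fxx = (f(X + h, Y) - 2*F + f(X - h, Y)) / h**2   # second difference, cf. (30)

I11 = (Fxx**2 / F).sum() * d * d                 # Riemann sum of the integral in (32)
closed_form = 2.0 / (si**4 * (1 - rho**2)**2)
print(I11, closed_form)
```

The same grid, with second differences taken along the other axis, verifies (33) and the cross term (34) in the same way.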


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tong, Y., Ni, J., Su, W., Hu, X. (2022). Spatial Image Steganography Incorporating Adjacent Dependencies. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2022. Lecture Notes in Computer Science, vol 13340. Springer, Cham. https://doi.org/10.1007/978-3-031-06791-4_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06790-7

  • Online ISBN: 978-3-031-06791-4
