A Novel Robust Image Forensics Algorithm Based on L1-Norm Estimation

  • Xin He
  • Qingxiao Guan
  • Yanfei Tong
  • Xianfeng Zhao
  • Haibo Yu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10082)


To improve the robustness of typical noise-variance-based image forensics, we propose a novel image forensics approach based on L1-norm estimation. First, we estimate the kurtosis and the noise variance of the high-pass filtered image. Then, we build a minimum-error objective function based on L1-norm estimation and solve it iteratively to compute the kurtosis and the noise variance of overlapping image blocks. Finally, the spliced regions are exposed through K-means cluster analysis. Since the noise variances of adjacent blocks are similar, our approach accelerates the iterative process by using the noise variance of the previous block as the initial value for the current block. Analysis and experiments show that our approach effectively solves the inaccurate localization caused by outliers. It also outperforms the reference algorithm in locating spliced regions, especially those with realistic appearances, and improves robustness effectively.


Keywords: Image splicing · L1-norm estimation · Noise variance · Image forensics

1 Introduction

With the rapid development of the Internet and the widespread use of electronic devices, digital images, as a carrier of media information, are spread widely in daily life. However, the ease of use of image-editing software brings not only great convenience but also security problems, since forgery by image processing has become much easier. Digital image forgery can cause adverse effects: serious misleading and misjudgment can occur if tampered images are used as key evidence by the media or legal departments, with serious impacts on society and public opinion. Therefore, image forensics has become an urgent problem.

There are various ways to tamper with images; the most typical is image composition. Images can be composited in two ways: copy-move and splicing. Both leave no visual abnormality, and their difference lies in whether the tampered regions come from the same picture. The former erases parts of an image's content by padding them with patches from other regions of the same image, while the latter tampers with the image by pasting a region copied from another image. Compared to copy-move, spliced images are more harmful, and detecting them is more complicated owing to the diversity of spliced regions, so it is more difficult to find an effective detection method. Therefore, in this paper we focus on forensics for spliced images.

In recent years, growing attention has been paid to image forensics. Currently, there are three main kinds of algorithms, based on the Color Filter Array (CFA), on lighting, and on noise, respectively. For CFA-based algorithms [1, 2], the main idea is that a camera's particular CFA pattern and CFA interpolation algorithm induce a particular correlation among pixels, and in spliced images, regions from different sources destroy this correlation; the spliced region can therefore be identified through the consistency of pixels. However, CFA-based methods are unable to locate the spliced regions precisely, are not robust to image resizing, and require prior knowledge of the CFA. Another kind of method utilizes the lighting conditions in the image. Johnson [3, 4] proposed exposing image forgery by detecting inconsistencies in lighting. The approach first extracts closed boundaries from the image and clips them into several blocks; it then estimates the lighting direction of every block and measures the consistency of the lighting directions to determine whether the image has been tampered with. This approach can detect spliced images efficiently, but if the object surface is not a Lambertian radiator, or the scene is cloudy or indoors, it can hardly estimate the lighting source, so its usage is limited. To overcome the limitations of these two kinds of algorithms, some researchers proposed detection algorithms based on image noise, which use the intrinsic noise of the image to distinguish normal regions from spliced regions. Furthermore, since camera noise is universally present, this kind of approach needs little prior knowledge and no training samples, which gives it broad applicability. Gou et al. [5] used image denoising, wavelet analysis, and neighborhood prediction to obtain statistical features of noise. This approach is easy to implement but needs high-performance filters, and it may produce errors if images contain many highly varying areas. Amer [6] presented a method to estimate the variance of additive white noise in images and frames. The method first finds intensity-homogeneous blocks and then estimates the noise variance in these blocks while taking image structure into account, using eight high-pass operators to measure the high-frequency image components.

Existing methods suffer from high complexity and low robustness. To solve these problems, we propose a new method based on L1-norm estimation to detect the forged region. This approach can be viewed as an improved extension of the noise-variance-based forgery detection method proposed by Pan et al. [7]. In our method, we first show that noise variance estimation can be described as a linear regression, and then convert it to an L1-norm regression problem to increase robustness. We use linear programming to solve the L1-norm regression and estimate the noise level of each block. Since adjacent blocks in the image share many overlapping pixels, we also design an acceleration strategy for the iterative computation in block-wise noise variance estimation, which reduces the computational complexity.

The rest of this paper is organized as follows: Sect. 2 describes the camera noise estimation method and reviews the relationship between kurtosis and noise variance proposed in [7], which lays the foundation of our work. Section 3 is the main part of this paper: it analyzes and summarizes the general form of noise variance estimation, and proposes the L1-norm-based variance estimation method and an acceleration strategy. The experimental results and analysis are described in Sect. 4. The last section gives the summary and future work.

2 Noise Estimation Based Forgery Detection

Pan proposed an approach to detect spliced image regions through inconsistent local noise variances. The foundation of the approach is that the tampered region in a spliced image is copied from another image, and the noise levels of the two images differ. In fact, noise in a digital image can be generated by many factors, including the camera sensor, quantization, and JPEG compression. It is common for the noise distributions and levels of two images to differ, which leaves an artifact in the spliced image. Thus, we can assess an image's integrity and locate spliced regions by exploiting inconsistent noise variance. On the other hand, since the image mixes noise with image content, we cannot observe the noise directly. Therefore, high-pass filters are used first to suppress the image content, and the relationship between kurtosis and noise variance is then modeled to estimate the noise variance of each region.

In this section, we first introduce the definition of kurtosis, from which we can find the relationship between variance and kurtosis in a filtered image region. Then we briefly review the noise variance estimation method proposed in [7], along with its analysis.

2.1 Kurtosis

Kurtosis is a statistical measure defined by (1):
$$\begin{aligned} \kappa = \frac{ E[(x - E(x))^4] }{ \left( E[(x - E(x))^2] \right)^2 } - 3. \end{aligned}$$
Provided we have n identical independent distribution samples \(x_i,i=1,2,...n\), drawing from certain distribution, we can estimate their kurtosis by (2):
$$\begin{aligned} \kappa = \frac{ \frac{ 1 }{ n } \sum _{i=1}^{n}{(x_i - \overline{x})^4} }{ \left[ \frac{ 1 }{ n } \sum _{i=1}^{n}{(x_i - \overline{x})^2} \right] ^2 } - 3, \end{aligned}$$
where \(\overline{x} = \sum _{i=1}^{n}{x_i} / n\) is the sample mean. Kurtosis describes the shape of the probability density function of a random variable. The kurtosis of the Gaussian distribution is 0. A kurtosis greater than 0 indicates a distribution with a high peak, called leptokurtic; a kurtosis less than 0 indicates a flat distribution, called platykurtic.
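The sample estimate (2) is straightforward to compute; a minimal NumPy sketch (the function name and library choice are ours, not the paper's):

```python
import numpy as np

def sample_kurtosis(x):
    """Excess kurtosis of samples x: m4 / m2^2 - 3 (zero for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    m2 = np.mean((x - m) ** 2)  # second central moment (biased variance)
    m4 = np.mean((x - m) ** 4)  # fourth central moment
    return m4 / m2 ** 2 - 3.0
```

For the alternating sequence of +1 and -1 the estimate is exactly -2 (platykurtic), while heavy-tailed samples such as Laplace noise give positive values (leptokurtic).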

According to several studies [8, 9, 10], the kurtosis of high-pass filtered natural images is positive and tends toward a constant, a phenomenon called kurtosis concentration.

2.2 Noise Variance Estimation

Although regions from different image sources clearly have different noise variances, we are unable to model the noise directly through the variance of pixels. There are two obstacles:
  1. Image noise is coupled with the image content, and the content signal is much stronger than the noise component, which prevents estimating the true noise variance.

  2. Noise may come from different distributions. It is improper to assume that all noise is zero-mean Gaussian; some distributions are determined by parameters other than the variance, so the variance alone is insufficient to describe them.


Pan’s method solves these two problems by using high-pass filters. In linear filtering, a high-pass filter not only suppresses image content but also combines the noise of neighboring pixels according to its weights, which makes the noise component in the filtered image approximately Gaussian. This property follows from the Central Limit Theorem.
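The Gaussianization effect can be illustrated numerically: i.i.d. uniform noise is strongly platykurtic (excess kurtosis -1.2), but after one pass of an 8x8 high-pass kernel its distribution is close to Gaussian. A NumPy/SciPy sketch (our illustration; the kernel shown is one AC basis filter of the 8x8 DCT):

```python
import numpy as np
from scipy.signal import convolve2d

# i.i.d. uniform noise: strongly non-Gaussian (excess kurtosis -1.2)
rng = np.random.default_rng(7)
noise = rng.uniform(-1.0, 1.0, size=(600, 600))

# One AC filter of the orthonormal 8x8 DCT basis
k = np.arange(8)
C = np.sqrt(2.0 / 8) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / 16)
C[0] /= np.sqrt(2.0)
f = np.outer(C[1], C[1])

# Each filtered pixel is a weighted sum of 64 i.i.d. variables
filtered = convolve2d(noise, f, mode='valid')

def excess_kurtosis(x):
    x = x.ravel() - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2 - 3.0
```

After filtering, the measured excess kurtosis is close to zero, consistent with the Central Limit Theorem argument above.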

After filtering, the kurtosis and noise variance of each local region in the filtered images can be calculated. Because the method aims to estimate the noise variance of the candidate image, it builds the estimation model from the relationship between the kurtosis and noise variance of the candidate image and those of the filtered images. More specifically, from the definitions of kurtosis and variance we can deduce (3):
$$\begin{aligned} \widetilde{\kappa _k} = \kappa \left( \frac{ \widetilde{\sigma }_k^2 - \sigma ^2 }{ \widetilde{\sigma }_k^2 } \right)^2, \end{aligned}$$
where \(\kappa \), \(\sigma ^2\) denote the kurtosis and noise variance of the candidate image, respectively, while \(\widetilde{\kappa _k}\), \(\widetilde{\sigma }_k^2\) denote those of the image filtered with the k-th filter, computed from regional pixels of the filtered image. As mentioned above, since the kurtosis is positive, we can take the square root of (3) to yield (4):
$$\begin{aligned} \sqrt{\widetilde{\kappa _k}} = \sqrt{\kappa } \left( \frac{ \widetilde{\sigma }_k^2 - \sigma ^2 }{ \widetilde{\sigma }_k^2 } \right). \end{aligned}$$
Suppose we have K filters in total; we utilize \(\widetilde{\kappa _k}\), \(\widetilde{\sigma }_k^2\) (\(k = 1,2,...,K\)) to estimate \(\kappa \) and \(\sigma ^2\). However, it is important to note that (3) and (4) are derived for a pure noise signal, whereas \(\widetilde{\kappa _k}\), \(\widetilde{\sigma }_k^2\) are calculated from filtered images, so \(\kappa \), \(\sigma ^2\), \(\widetilde{\kappa _k}\), \(\widetilde{\sigma }_k^2\) do not rigorously follow (3), for two reasons:
  1. Although the filters suppress image content to some extent, they cannot eliminate its influence entirely, which causes inaccuracy in the computed \(\widetilde{\kappa _k}\) and \(\widetilde{\sigma }_k^2\).

  2. \(\widetilde{\kappa _k}\) and \(\widetilde{\sigma }_k^2\) are calculated from a local region of size \(32 \times 32\) in the filtered image. Since the number of pixels in the region is finite, these statistics inevitably suffer from some instability.

For these two reasons, the method converts the estimation task into the regression problem (5):
$$\begin{aligned} \mathop {argmin} \limits _{\sqrt{\kappa }, \sigma ^2} \sum _{k=1}^{K} \left[ \sqrt{\widetilde{\kappa _k}} - \sqrt{\kappa } (\frac{ \widetilde{\sigma }_k^2 - \sigma ^2 }{ \widetilde{\sigma }_k^2 }) \right] ^2. \end{aligned}$$
The regression model jointly utilizes the results from the K filtered images, and the closed-form optimal solution of (5) is given by (6) and (7):
$$\begin{aligned} \sqrt{\kappa } = \frac{ \langle \sqrt{\widetilde{\kappa _k}} \rangle _k \langle \frac{1}{(\widetilde{\sigma }_k^2)^2} \rangle _k - \langle \frac{\sqrt{\widetilde{\kappa _k}}}{\widetilde{\sigma }_k^2} \rangle _k \langle \frac{1}{\widetilde{\sigma }_k^2} \rangle _k}{ \langle \frac{1}{(\widetilde{\sigma }_k^2)^2} \rangle _k - \langle \frac{1}{\widetilde{\sigma }_k^2} \rangle _k^2 } \end{aligned}$$
$$\begin{aligned} \sigma ^2 = \frac{1}{\langle \frac{1}{\widetilde{\sigma }_k^2} \rangle _k} - \frac{1}{\sqrt{\kappa }} \frac{\langle \sqrt{\widetilde{\kappa _k}} \rangle _k}{\langle \frac{1}{\widetilde{\sigma }_k^2} \rangle _k}, \end{aligned}$$
where the bracket \(\langle \cdot \rangle _k\) denotes the mean value over all K filters.
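The closed form (6)-(7) can be checked numerically. The sketch below (NumPy, with made-up filter statistics that satisfy model (3) exactly; the variable names are ours) recovers \(\kappa \) and \(\sigma ^2\):

```python
import numpy as np

def estimate_kappa_sigma2(kurt_f, var_f):
    """Closed-form L2 solution (6)-(7): per-filter kurtosis/variance -> (kappa, sigma^2)."""
    kurt_f, var_f = np.asarray(kurt_f, float), np.asarray(var_f, float)
    sk = np.sqrt(kurt_f)   # sqrt of filtered-region kurtoses
    u = 1.0 / var_f        # reciprocal filtered variances
    sqrt_kappa = ((sk.mean() * np.mean(u**2) - np.mean(sk * u) * u.mean())
                  / (np.mean(u**2) - u.mean()**2))
    sigma2 = 1.0 / u.mean() - (1.0 / sqrt_kappa) * sk.mean() / u.mean()
    return sqrt_kappa**2, sigma2

# Synthetic check: kappa = 4, sigma^2 = 2, filtered variances var_f
var_f = np.array([3.0, 4.0, 5.0, 6.0, 8.0])
kurt_f = (2.0 * (var_f - 2.0) / var_f) ** 2   # model (3) with kappa=4, sigma^2=2
kappa, sigma2 = estimate_kappa_sigma2(kurt_f, var_f)
```

On exact model data the estimator returns kappa = 4 and sigma^2 = 2 up to floating point, since (4) is linear in \(1/\widetilde{\sigma }_k^2\) and (6)-(7) are the least-squares intercept and its transform.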

To locate the tampered region, it is necessary to analyze the consistency of noise variance across regions of the image. Therefore, the global noise variance estimation needs to be extended to local estimation, i.e., computing the noise variance of overlapping blocks with the method above. Since this computation is performed for all overlapping blocks, the complexity of the algorithm is high. Integral images can be used to accelerate it [11]: we first compute the m-th order raw moments with integral images, and then estimate the kurtosis and noise variance of each region from their definitions. In this way, local noise variance estimation is realized.

Pan used the 2D DCT basis as the filter bank and pointed out that other high-pass filters, such as wavelets or Fast Independent Component Analysis (FastICA), can also be chosen for this task. In this paper, we use the DCT basis as the filter bank.
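The 2D DCT filter bank can be generated from the orthonormal 1D DCT-II basis: the outer products of its rows give the 64 basis kernels, of which all but the DC one are high-pass. A NumPy sketch of this construction (ours, assuming an 8x8 basis):

```python
import numpy as np

def dct_filter_bank(n=8):
    """Return the n*n - 1 AC filters of the 2D DCT basis as n x n kernels."""
    k = np.arange(n)
    # Orthonormal 1D DCT-II matrix: rows are the basis vectors
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    # 2D basis kernels = outer products of 1D rows; drop the first (DC) kernel
    return [np.outer(C[i], C[j]) for i in range(n) for j in range(n)][1:]
```

Each AC kernel sums to zero, i.e. it removes the local mean, which is the high-pass property the estimation relies on.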

2.3 Analysis

Experiments show that the algorithm can locate the spliced region efficiently in most cases. However, (4) holds only in an ideal noise setting. In practical applications, the influence of the image content cannot be entirely eliminated, so the values of \(\widetilde{\kappa _k}\) and \(\widetilde{\sigma }_k^2\) obtained from some filter channels deviate from (4) and produce incorrect results. We call such values outliers. If there are many outliers in the image, the detection result is influenced greatly, and false detections may even occur, as shown in Fig. 1:
Fig. 1.

Pan’s detection. (Top: Tampered images, Bottom: Pan’s results.)

3 Robust Noise Estimation Model

In this section, we first reformulate Pan's method as a linear regression problem, and then apply the L1-norm to this problem to increase its robustness. The solution of our method and an acceleration strategy are also presented.

3.1 Linear Regression Model for Noise Variance Estimation

As shown in Sect. 2, the critical step of the algorithm is to solve (5) to obtain the noise variance. We first analyze the properties of (5) and then present our optimization method. For clarity, we rewrite (5) here:
$$\begin{aligned} \mathop {argmin} \limits _{\sqrt{\kappa }, \sigma ^2} \sum _{k=1}^{K} \left[ \sqrt{\widetilde{\kappa _k}} - \sqrt{\kappa } (\frac{ \widetilde{\sigma }_k^2 - \sigma ^2 }{ \widetilde{\sigma }_k^2 }) \right] ^2. \end{aligned}$$
Next, we reformulate problem (5) simply by describing its variables in another form. Let \(w_1 = \sqrt{\kappa }\), \(w_2 = \sigma ^2 \sqrt{\kappa }\), \(w = [w_1 \ w_2]^T\), \(x_k = [-1 \ 1/\widetilde{\sigma }_k^2]^T\), \(b_k = \sqrt{\widetilde{\kappa _k}}\), and substitute these new variables into (5). Because \(x_k\), \(b_k\) are known and w is unknown, we obtain the new objective function (7):
$$\begin{aligned} \mathop {argmin} \limits _{w} F(w) = \sum _{k=1}^K (w^{T}x_k + b_k)^2. \end{aligned}$$
Now we can obtain \(\sqrt{\kappa }\) and \(\sigma ^2\) in (5) by solving (7). Denoting the solution of (7) by \(w^* = [w_1^*\ w_2^*]^T\), the solution of (5) is recovered as (8):
$$\begin{aligned} \sqrt{\kappa } = w_1^*,\ \sigma ^2 = w_2^* / w_1^*. \end{aligned}$$
Problem (7) captures the relationship between \(w_1\) and \(w_2\), and it is obviously a linear regression problem with respect to the variable w. Establishing this equivalence between (5) and a linear regression problem is significant, as it facilitates our further work on the noise estimation model.
With the linear regression formulation, we can derive a detection algorithm in a more general sense: using a general distance function \(D(w^T x_k, \sqrt{\widetilde{\kappa _k}})\), the estimation problem becomes a regression problem with other distance metrics, as in (9):
$$\begin{aligned} \mathop {argmin} \limits _{w} \sum _{k=1}^K D(w^T x_k, \sqrt{\widetilde{\kappa _k}}). \end{aligned}$$
Moreover, (9) admits various distance metrics for better results, as presented in the next subsection.

3.2 L1-Norm Based Noise Variance Estimation

As mentioned in the previous subsection, the original method proposed by Pan is essentially an L2-loss regression. However, the L2 loss is unstable in the presence of outliers. Because we employ filters of different frequency bands, some of them may produce extraordinarily high responses caused by image content in the local region; for these filters, \(\widetilde{\kappa _k}\), \(\widetilde{\sigma }_k^2\) deviate greatly from the true values of the noise distribution. We call such data outliers. The L2 loss is sensitive to outliers: even when outliers occur for only a few filters, the solution of (5) is greatly influenced by them. Since image content is complex, such outliers must usually be handled to obtain a better estimate.

To improve the robustness of noise estimation, we replace the L2 loss in (7) with the L1 loss. Compared with the L2 loss, the advantage of the L1 loss is clear: it uses the absolute value instead of the squared value, so it compromises less with the abnormal errors caused by outliers in regression. The estimation model of our method is presented in (10):
$$\begin{aligned} \mathop {argmin} \limits _{w} F(w) = \sum _{k=1}^K | w^{T}x_k + b_k |, \end{aligned}$$
where w, \(x_k\), \(b_k\) are defined as in (7).

This is the least absolute deviations regression problem. Unlike (7), (10) has no closed-form solution because of the L1 loss, so it must be solved numerically. In this paper, we use linear programming to solve (10); this method was proposed in [12, 13], and we present it below for completeness.

To solve (10) by linear programming, we introduce 2K auxiliary variables \(\xi _1^+\), \(\xi _1^-\), \(\xi _2^+\), \(\xi _2^-\), ..., \(\xi _K^+\), \(\xi _K^-\), and formulate the linear programming problem (11):
$$\begin{aligned} \mathop {argmin} \limits _{w, \xi } \sum _{k=1}^{K} (\xi _k^+ + \xi _k^-) \quad s.t. \quad w^T x_k + b_k = \xi _k^+ - \xi _k^-, \ \xi _k^+ \ge 0, \ \xi _k^- \ge 0, \ k = 1,2,...,K, \end{aligned}$$
where \(\xi = \{ \xi _1^+, \xi _1^-, \xi _2^+, \xi _2^-, ..., \xi _K^+, \xi _K^- \}\).

In the above problem, there are \(2K+2\) variables to solve. The objective function and the constraints are all linear, so (11) can be solved by standard linear programming methods.
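A minimal sketch of (11) with SciPy's linear programming routine (the variable layout \([w, \xi ^+, \xi ^-]\) and the helper names are ours):

```python
import numpy as np
from scipy.optimize import linprog

def l1_noise_fit(kurt_f, var_f):
    """Solve (10) as the LP (11); return (kappa, sigma^2) via (8)."""
    K = len(kurt_f)
    b = np.sqrt(np.asarray(kurt_f, float))   # b_k = sqrt of filtered kurtosis
    X = np.column_stack([-np.ones(K), 1.0 / np.asarray(var_f, float)])  # rows x_k^T
    # Variables: [w1, w2, xi_plus (K), xi_minus (K)]; minimize sum(xi+ + xi-)
    c = np.concatenate([np.zeros(2), np.ones(2 * K)])
    A_eq = np.hstack([X, -np.eye(K), np.eye(K)])   # w^T x_k + b_k = xi+_k - xi-_k
    res = linprog(c, A_eq=A_eq, b_eq=-b,
                  bounds=[(None, None)] * 2 + [(0, None)] * (2 * K))
    w1, w2 = res.x[:2]
    return w1**2, w2 / w1   # kappa = w1^2, sigma^2 = w2/w1, as in (8)
```

On synthetic statistics generated from (3) with kappa = 4 and sigma^2 = 2 plus one corrupted filter channel, the L1 fit still recovers (4, 2), whereas a squared-loss fit is pulled toward the outlier.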

It remains to prove that the w-part of the solution of (11) equals the solution of (10). Assume \(\widehat{w}, \ \widehat{\xi _1^+},\ \widehat{\xi _1^-},\ \widehat{\xi _2^+}, \ \widehat{\xi _2^-}, ..., \widehat{\xi _K^+},\ \widehat{\xi _K^-}\) is the optimal solution of (11). By the constraints of (11), we have \(\widehat{w}^T x_k + b_k = \widehat{\xi _k^+} - \widehat{\xi _k^-}, \ \widehat{\xi _k^+} \ge 0, \ \widehat{\xi _k^-} \ge 0, \ k = 1,2,...,K\). Next we prove that for any k, at most one of \(\widehat{\xi _k^+}\) and \(\widehat{\xi _k^-}\) is positive and the other is zero, which implies that \(\widehat{\xi _k^+} + \widehat{\xi _k^-}\) equals the absolute value of \(\widehat{w}^T x_k + b_k\). That is:
$$\begin{aligned} if\ \widehat{w}^T x_k + b_k > 0 \ then \ \widehat{w}^T x_k + b_k = \widehat{\xi _k^+}, \end{aligned}$$
$$\begin{aligned} if\ \widehat{w}^T x_k + b_k < 0 \ then \ \widehat{w}^T x_k + b_k = -\widehat{\xi _k^-}. \end{aligned}$$
Otherwise, if there exists l such that \(\widehat{\xi _l^+}> 0, \ \widehat{\xi _l^-} > 0\), we can construct another feasible solution \(\widetilde{w}, \ \widetilde{\xi }\) as (14) shows:
$$\begin{aligned} \begin{array}{c} \widetilde{w} = \widehat{w}, \\ \widetilde{\xi _1^+} = \widehat{\xi _1^+}, \ \widetilde{\xi _1^-} = \widehat{\xi _1^-},\\ \widetilde{\xi _2^+} = \widehat{\xi _2^+}, \ \widetilde{\xi _2^-} = \widehat{\xi _2^-},\\ \vdots \\ \widetilde{\xi _{l-1}^+} = \widehat{\xi _{l-1}^+}, \ \widetilde{\xi _{l-1}^-} = \widehat{\xi _{l-1}^-},\\ \widetilde{\xi _l^+} = \max {(0, \widehat{\xi _l^+} - \widehat{\xi _l^-})},\ \widetilde{\xi _l^-} = \max {(0, \widehat{\xi _l^-} - \widehat{\xi _l^+})},\\ \vdots \\ \widetilde{\xi _K^+} = \widehat{\xi _K^+}, \ \widetilde{\xi _K^-} = \widehat{\xi _K^-}. \end{array} \end{aligned}$$
From the above construction we know that \(\widetilde{\xi _l^+} - \widetilde{\xi _l^-} = \widehat{\xi _l^+} - \widehat{\xi _l^-}\), \(\widetilde{\xi _l^+} \ge 0\), \(\widetilde{\xi _l^-} \ge 0\), and at most one of \(\widetilde{\xi _l^+}\), \(\widetilde{\xi _l^-}\) is positive, so \(\widetilde{w}, \ \widetilde{\xi }\) is a feasible solution. However, \(\widetilde{\xi _l^+} + \widetilde{\xi _l^-} < \widehat{\xi _l^+} + \widehat{\xi _l^-}\), and consequently:
$$\begin{aligned} \sum _{k=1}^{K}(\widetilde{\xi _k^+} + \widetilde{\xi _k^-}) = \sum _{k,k\ne l}(\widehat{\xi _k^+} + \widehat{\xi _k^-})+\widetilde{\xi _l^+} + \widetilde{\xi _l^-} < \sum _{k=1}^{K}(\widehat{\xi _k^+} + \widehat{\xi _k^-}). \end{aligned}$$
Equation (15) contradicts the assumption that \(\widehat{w}, \ \widehat{\xi }\) is the optimal solution of (11). Thus, a solution with \(\widehat{\xi _l^+}> 0,\ \widehat{\xi _l^-}> 0\) cannot be optimal.

3.3 Acceleration Strategy for Consecutive Blocks

The proposed approach solves a linear programming problem for each overlapping block, and since this problem has no simple analytical solution, it must be solved iteratively. To reduce the computational cost of this iterative process, we design an acceleration strategy. We notice that adjacent blocks differ by only one row or one column: for \(32 \times 32\) blocks, two adjacent blocks share \(31/32 \approx 96.9\%\) of their pixels. Therefore, their noise variances are highly similar when both belong to the tampered region or both to the authentic region. We can thus estimate the noise variance of blocks sequentially in row-scan order and accelerate the iterative process by using the noise variance of the previous block as the initial value for the current block. This fully utilizes the result of the adjacent block to reduce the iterations in the linear programming.
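SciPy's linprog does not expose warm starts, so one way to realize this strategy is to use an iterative L1 solver that accepts an initial guess, e.g. iteratively reweighted least squares (a substitution of ours for illustration, not the paper's exact solver; the names `X_block`, `b_block` are hypothetical):

```python
import numpy as np

def irls_l1(X, b, w0, iters=100, eps=1e-8):
    """Approximate argmin_w sum_k |x_k^T w + b_k| by iteratively reweighted
    least squares, starting from w0 (e.g. the previous block's solution)."""
    w = np.asarray(w0, float).copy()
    for _ in range(iters):
        r = X @ w + b
        d = 1.0 / np.maximum(np.abs(r), eps)   # weights ~ 1/|residual|
        A = X.T @ (d[:, None] * X)             # weighted normal equations
        w_new = np.linalg.solve(A, -X.T @ (d * b))
        if np.allclose(w_new, w, atol=1e-10):
            break
        w = w_new
    return w

# Row-scan over blocks: seed each block with the previous block's estimate
# for X_block, b_block in row_of_blocks:
#     w = irls_l1(X_block, b_block, w0=w)
```

Because adjacent blocks share about 97% of their pixels, the previous solution is usually close to the next optimum and only a few reweighting steps are needed.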

3.4 Procedure of Our Method

The image forensics algorithm based on L1-norm estimation proceeds as follows:
  1. Filter the candidate image with the 63 AC filters of the \(8 \times 8\) DCT decomposition, producing 63 high-pass channel images.
  2. Divide each high-pass channel image into overlapping windows of \(32 \times 32\) pixels, compute the raw moments of first to fourth order, and compute the corresponding integral images. After this step, there are 4 integral images for each high-pass channel image.
  3. Using the definitions of kurtosis and noise variance in terms of raw moments, compute the kurtosis and noise variance of all overlapping blocks for each high-pass channel image.
  4. From the kurtosis and noise variance of all channels, use (10) and (11) to compute the kurtosis and noise variance of the candidate image. This step can be accelerated with the strategy mentioned above. The result is an estimated noise variance image and a kurtosis image.
  5. Cluster the pixels by their variance values with K-means, which segments the image into authentic regions and forged regions.
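Steps 2-3 above can be sketched with summed-area tables: one cumulative sum per moment order gives the raw moments of every overlapping block in O(1) per block (NumPy sketch; the function names are ours):

```python
import numpy as np

def block_raw_moments(channel, block=32, orders=(1, 2, 3, 4)):
    """Mean of channel**m over every overlapping block x block window,
    via integral images (one summed-area table per order)."""
    ch = np.asarray(channel, float)
    out = {}
    for m in orders:
        # Zero-padded summed-area table of channel**m
        S = np.pad(np.cumsum(np.cumsum(ch**m, axis=0), axis=1), ((1, 0), (1, 0)))
        out[m] = (S[block:, block:] - S[:-block, block:]
                  - S[block:, :-block] + S[:-block, :-block]) / block**2
    return out

def block_kurtosis_variance(mom):
    """Per-block excess kurtosis and variance from raw moments 1..4."""
    m1, m2, m3, m4 = mom[1], mom[2], mom[3], mom[4]
    var = m2 - m1**2
    m4c = m4 - 4*m3*m1 + 6*m2*m1**2 - 3*m1**4   # fourth central moment
    return m4c / var**2 - 3.0, var
```

Each window's statistics then cost four table lookups per moment order instead of a full pass over its pixels.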

The process is shown in Fig. 2:
Fig. 2.

Our algorithm process.

4 Experiment Result and Analysis

To test our approach, we use two data sources: the Columbia uncompressed image splicing detection evaluation dataset and the CASIA 2.0 spliced image library from the Institute of Automation, Chinese Academy of Sciences. The Columbia dataset consists of 183 real images and 180 spliced images [14]. Each forged image, spliced from two original images, is stored in uncompressed TIF format, with sizes ranging from \(757 \times 568\) to \(1152 \times 768\) pixels. CASIA 2.0 contains many different scenarios; some of its spliced images are very hard for people to distinguish, close to spliced images in the real world. Its image formats include JPG and TIF. Using both databases lets us verify the effectiveness and adaptability of our algorithm more comprehensively.

Figure 3 compares the detection results of Pan's algorithm and ours on uncompressed TIF images from the Columbia dataset.
Fig. 3.

Detection results on Columbia’s dataset. (Top: Tampered images. Middle: Pan’s results. Bottom: Our results.)

These results show that, compared with Pan's algorithm, our approach reduces the influence of outliers in the background. For example, in the first image of Fig. 3, our approach is more robust and detects the tampered area more accurately, since the impact of the wall is mostly removed.

Figure 4 compares Pan's method and ours on CASIA 2.0. These images were randomly chosen from CASIA 2.0 and converted from TIF to JPG format. The results show that even when the tampered images are manipulated so carefully that the forgery cannot be identified by eye, our algorithm can still expose the forged area effectively. The results also indicate that our algorithm performs well not only on uncompressed images such as TIF but also on JPG images, so it can be widely applied. However, for images with large smooth areas, such as the grass in the fourth image of Fig. 4, the detection result is slightly affected, and the smooth area may be wrongly classified as tampered. This is mainly because the noise distribution in smooth areas differs greatly from that in other areas.
Fig. 4.

Detection results on CASIA 2.0. (Top: Tampered images. Middle: Pan’s results. Bottom: Our results.)

Table 1 evaluates the two algorithms by the True Positive Rate (TPR) and the Probability of False Acceptance (FPA). Each value in the table is the average of multiple experiments performed on the Columbia dataset and CASIA 2.0.
Table 1.

The performance comparison between Pan’s and Ours.


Dataset | Ours TPR | Pan's TPR | Ours FPA | Pan's FPA
Columbia's dataset |  |  |  | 
5 Conclusion and Future Works

To improve the low robustness of typical noise-variance-based image forensics, we proposed a novel image forensics approach based on L1-norm estimation. By transforming the noise estimation model into linear regression form and applying the L1-norm to it, our algorithm increases robustness effectively. We also proposed an acceleration strategy that exploits the similarity of adjacent blocks, making the method more practical in real applications. Analysis and experiments show that our approach effectively solves the inaccurate localization caused by outliers.

Future work on forgery detection based on noise estimation includes:
  1. Extending the algorithm to GPU acceleration, which will greatly increase efficiency.
  2. Incorporating image segmentation methods into forgery detection for better robustness. For example, when there are many smooth regions in the image, the exposed tampered regions may deviate, and a more effective algorithm is needed to handle this case.
  3. Using other robust estimation functions for computing the noise to further improve the true detection rate.




This work was supported by the NSFC under U1536105 and 61303259, National Key Technology R&D Program under 2014BAH41B01, Strategic Priority Research Program of CAS under XDA06030600, and Key Project of Institute of Information Engineering, CAS, under Y5Z0131201.


  1. Dirik, A.E., Memon, N.D.: Image tamper detection based on demosaicing artifacts. IEEE Trans. Image Process., 1497–1500 (2009)
  2. Cao, H., Kot, A.C.: Manipulation detection on image patches using FusionBoost. IEEE Trans. Inf. Forensics Secur. 7(3), 992–1002 (2012)
  3. Johnson, M.K., Farid, H.: Exposing digital forgeries by detecting inconsistencies in lighting. In: Proceedings of the ACM 7th Workshop on Multimedia and Security, pp. 1–10 (2005)
  4. Johnson, M.K., Farid, H.: Exposing digital forgeries through specular highlights on the eye. In: International Workshop on Information Hiding, pp. 311–325 (2007)
  5. Gou, H., Swaminathan, A., Wu, M.: Intrinsic sensor noise features for forensic analysis on scanners and scanned images. IEEE Trans. Inf. Forensics Secur. 4(3), 476–491 (2009)
  6. Amer, A., Dubois, E.: Fast and reliable structure-oriented video noise estimation. IEEE Trans. Circ. Syst. Video Technol. 15(1), 113–118 (2005)
  7. Pan, X., Zhang, X., Lyu, S.: Exposing image splicing with inconsistent local noise variances. In: IEEE International Conference on Computational Photography, pp. 1–10 (2012)
  8. Bethge, M.: Factorial coding of natural images: how effective are linear models in removing higher-order dependencies. J. Opt. Soc. Am. A 23(6), 1253–1268 (2006)
  9. Lyu, S., Simoncelli, E.P.: Nonlinear extraction of independent components of natural images using radial Gaussianization. Neural Comput. 21(6), 1485–1519 (2009)
  10. Zoran, D., Weiss, Y.: Scale invariance and noise in natural images. In: IEEE International Conference on Computer Vision, pp. 2209–2216 (2009)
  11. Viola, P., Jones, M.: Robust real-time object detection. Int. J. Comput. Vision 21 (2001)
  12. Chen, X.: Least absolute linear regression. Appl. Stat. Manage. 5, 48 (1989)
  13. Li, Z.: Introduction of least absolute deviation method. Bull. Maths 2, 40 (1992)
  14. Hsu, Y.F., Chang, S.F.: Detecting image splicing using geometry invariants and camera characteristics consistency. In: IEEE International Conference on Multimedia and Expo, pp. 549–552 (2006)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Xin He 1,2
  • Qingxiao Guan 1,2
  • Yanfei Tong 1,2
  • Xianfeng Zhao 1,2
  • Haibo Yu 1,2

  1. State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China
  2. University of Chinese Academy of Sciences, Beijing, China
