1 Introduction

The popularity of photography is obvious: many people take photos with their smartphones or cameras and immediately share them on social media. Tracing cameras may therefore expose users’ privacy to a serious threat. The problem of tracing digital cameras has been studied for a long time. Tracing a camera is understood as recognizing the camera’s sensor. In [11] the process of searching for characteristic features that identify the camera is called hardwaremetry. According to [13, 31], each camera leaves specific and unique traits in its images that make it possible to trace the camera and can serve as a “camera fingerprint”. Such a fingerprint is most often understood as pixel artifacts resulting from sensor imperfections or defects of the optics. One of the classic, state-of-the-art algorithms for camera recognition was presented by Lukás et al. in [31]. For each camera a so-called sensor pattern noise is determined, which acts as a unique camera fingerprint. It is calculated by averaging the noise extracted, using a denoising filter, from multiple images taken by the camera. According to the authors, the efficiency of the algorithm depends on the denoising filter used, and experimental evaluation showed that a wavelet-based denoising filter achieves the best results. This approach is very effective and recognizes cameras at a very high rate. However, wavelet-based denoising is very time consuming [30]: denoising a 12-megapixel photo takes about two minutes, and a 24-megapixel photo three or even four minutes. This clearly makes the approach impractical for large sets of images. There is therefore a motivation to consider methods that are less accurate but much faster.

In this paper we discuss a quite different approach to recognizing the camera. Instead of the aforementioned noise that can be extracted from images, we consider typical defects of the optical system: lens distortion and vignetting.

Vignetting is a defect that occurs due to optical flaws or sensor imperfections [10, 34]. It manifests as a reduction of image brightness towards the edges of the frame, so vignetted images have visibly darker corners than the rest of the image. This defect is common in many digital cameras (especially compacts and digital single-lens reflexes). The types of vignetting are precisely described in [10]. The topic has attracted many researchers, and numerous algorithms [6, 22,23,24] and patents [27] for vignetting correction have been developed. Lens distortion is a deviation from rectilinear projection [5, 32]. It causes straight lines in the scene to appear curved, and is seen in images as a change of magnification with distance from the optical axis [14].

We analyze these defects in order to recognize the camera’s brand. The goal is to reduce the image processing time compared to Lukás et al.’s method. To the best of our knowledge, nobody has previously tried to use vignetting or lens distortion for tracing cameras.

1.1 Contribution

The contribution of this paper is twofold. First, we analyze the vignetting defect in order to identify the brand of a camera. We examine the reduction of brightness at the edges of a set of images sharing the same frame perspective, and experimentally show that there are tendencies in the underexposed edge areas that may help to recognize the camera’s brand or even its model. Second, we show that analyzing image distortion can also be used to distinguish two cameras. The proposed observations are less efficient at brand recognition than Lukás et al.’s algorithm [31] but significantly faster, which makes them practical for processing large sets of images.

1.2 Organization of the paper

The paper is organized as follows. Section 2 describes related and previous work. Section 3 presents the analysis of vignetting and lens distortion. Section 4 presents the experimental evaluation. The final section concludes this work and points out some future research directions.

2 Related and previous work

The issue of tracing cameras has been studied in various ways. One of the most popular, state-of-the-art works on camera sensor recognition is Lukás et al.’s algorithm [31], described in detail in Appendix I. The authors propose an algorithm for calculating a camera’s fingerprint based on the difference between an image p and its denoised form F(p).
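To make the residual-averaging idea concrete, here is a minimal Python sketch (our own, not the authors’ code); the wavelet-based denoiser of [31] is replaced by a Gaussian filter purely for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(p: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Noise residual W = p - F(p). A Gaussian blur stands in here for
    the wavelet-based denoising filter F used in [31]."""
    p = p.astype(np.float64)
    return p - gaussian_filter(p, sigma)

def camera_fingerprint(images: list) -> np.ndarray:
    """Estimate the sensor pattern noise of one camera by averaging the
    noise residuals of many of its images."""
    return np.mean([noise_residual(p) for p in images], axis=0)
```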

In [2] we proposed a fast method for camera tracing based on the peak signal-to-noise ratio. The method is very fast in comparison with [31]; however, its classification efficiency is lower.

In [1] the authors propose using the k-means algorithm for managing photo-response non-uniformity (PRNU) patterns. Patterns are compared to each other using correlation and grouped by the k-means algorithm, so that similar patterns gathered in one cluster are considered to belong to the same camera. Experiments were conducted on a database of 500 images; images grouped within a cluster belonged to the particular camera with a true positive rate (TPR) of 98%. The idea of fingerprint clustering is also presented in [3].
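The clustering step can be sketched as follows; this is only our reading of [1], with the feature construction (rows of the pairwise correlation matrix) and the demo data being our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_prnu(patterns: np.ndarray, n_cameras: int) -> np.ndarray:
    """Group PRNU patterns by camera: correlate every pair of patterns,
    then cluster the rows of the correlation matrix with k-means."""
    flat = patterns.reshape(len(patterns), -1).astype(np.float64)
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True)
    corr = flat @ flat.T                 # pairwise normalized correlations
    return KMeans(n_clusters=n_cameras, n_init=10).fit_predict(corr)

# Demo on synthetic data: 20 noisy patterns derived from 2 "cameras".
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 64, 64))
patterns = np.stack([base[i % 2] + 0.5 * rng.normal(size=(64, 64))
                     for i in range(20)])
print(cluster_prnu(patterns, n_cameras=2))
```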

In [20] an analysis of JPEG compression is carried out. It is well known that JPEG lossy compression generates noise that impacts groups of pixels. The authors show, however, that JPEG compression adds specific artifacts to the final image, and the exact implementation details may be used for identification.

In [17] an approach is presented for counterfeiting the characteristic features of a camera in order to produce an image “pretending” to have been taken by another camera. The technique is described as a photo-response non-uniformity fingerprint-copy attack. The goal is to obfuscate the sensor pattern noise of a particular camera by “inserting” into the image the sensor pattern noise of another camera, and it is shown that this can be done by performing simple algebraic operations. Let us assume that \(\hat {K_{N}}\) is a camera fingerprint calculated from N images and J is an image into which we want to implant \(\hat {K_{N}}\). Then \(J' = J(1+\alpha \hat {K_{N}})\), where α > 0 is a scalar defining the fingerprint strength. Experimental results show that such an “exchange” of camera fingerprints is very efficient, i.e., based on \(\hat {K_{N}}\) we can produce a counterfeit photo \(J^{\prime }\) pretending to come from that camera. Nearly the same proposition is described in [35, 41]. This technique has a serious disadvantage that may make it impractical: it requires a representative image set of the camera whose fingerprint we want to “exchange”, and it affects the actual image (i.e., the stored information). Because it denoises images, the method is also very time consuming.
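The implantation step is a direct transcription of the formula above; in this sketch the clipping back to the valid pixel range is our addition.

```python
import numpy as np

def implant_fingerprint(J: np.ndarray, K_hat: np.ndarray, alpha: float) -> np.ndarray:
    """Fingerprint-copy attack: J' = J(1 + alpha * K_hat), where K_hat is
    the fingerprint of the impersonated camera and alpha > 0 sets its
    strength."""
    J_forged = J.astype(np.float64) * (1.0 + alpha * K_hat)
    return np.clip(J_forged, 0, 255).astype(np.uint8)
```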

In [4, 26, 29] dead pixels, pixel traps, point/hot-point defects and cluster defects were investigated in terms of camera recognition. Experimental results show that different cameras have distinct patterns of defective pixels; hence, in some cases, hot and dead pixels allow recognizing the sensor.

In [21] a method similar to [31] is presented. The sensor fingerprint is considered as white noise present in the images. The authors suggest a test statistic based on circular correlation, which may reduce the false positive rate (FPR) of camera recognition. However, in contrast to [31], the proposed method is examined on fragments of photos instead of “full” photos. The true positive rate of recognition was 95% for fragments of size 256×256 px and 99% for 512×512 px.

In [13] a technique is proposed that identifies the camera using cross-correlation analysis and the peak-to-correlation-energy (PCE) ratio. The sensor pattern noise is calculated, and a correlation detector with the PCE ratio is used to measure the similarity between noise residuals. However, time performance is not examined.
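For reference, one common formulation of the PCE statistic can be sketched as below; the size of the neighbourhood excluded around the peak is our assumption, and `ncc` is assumed to be a precomputed normalized cross-correlation surface.

```python
import numpy as np

def pce(ncc: np.ndarray, peak: tuple, exclude: int = 5) -> float:
    """Peak-to-correlation-energy ratio: the squared correlation peak over
    the mean squared correlation outside a small window around the peak."""
    r, c = peak
    mask = np.ones(ncc.shape, dtype=bool)
    mask[max(r - exclude, 0):r + exclude + 1,
         max(c - exclude, 0):c + exclude + 1] = False
    return float(ncc[r, c] ** 2 / np.mean(ncc[mask] ** 2))
```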

In [18] a method for camera identification using correlation is presented. The authors assume an existing database of different cameras’ fingerprints and calculate the correlation coefficient with the fingerprint of a new camera for comparison. This approach is clearly based on Lukás et al.’s algorithm; moreover, the authors do not describe how the database of fingerprints is obtained.

In [39, 40] a gradient technique for vignetting correction is described. It is also pointed out that vignetting can be characterized by natural image statistics. In [23,24,25] polynomial models for vignetting correction are proposed.

In [32] two methods for calculating lens distortion for camera calibration are presented. The first method is based on look-up tables (LUTs) of focal length and lens distortion. The second method uses relationships between feature points found in the image; the lens distortion is then calculated by algebraic operations.

Work [28] presents a method for camera calibration and radial distortion correction. The radial distortion can be calculated from two distorted images. The advantage of this method is that no knowledge about the camera’s intrinsic parameters or the scene structure is required.

In [42] an innovative technique using a game-theoretic approach to identifying digital cameras is discussed. The aim is to detect the fingerprint-copy attack, in which an adversary uses a copy of the original camera’s fingerprint. The problem is represented as an interplay between sensor-based camera identification and the fingerprint-copy attack. A Bayesian game is used to analyze differences between the original camera’s fingerprint and the fingerprint copy, and the Nash equilibrium is used to evaluate the efficacy of the proposed method.

In [33] an approach is proposed for identifying a camera with the use of an enhanced Poissonian-Gaussian model, which describes the distribution of pixels in a RAW image. Cameras’ fingerprints are represented as the parameters of the statistical noise of this model. Experiments are conducted on the Dresden Image Database as well as the authors’ own image set.

In [37] the problem of image splicing is considered. It is assumed that image splicing can be detected by analyzing the noise level in spliced parts: spliced regions turn out to have different noise levels, which causes noise inconsistencies between them and in turn allows the splicing to be detected. A noise level function (NLF) is used, and experimental evaluation confirms the efficacy of the NLF estimation. Work [37] is an extension of [33], where the inconsistencies of spliced image regions are examined. Due to the limitations of the standard solution for estimating the noise variance of each region, the authors propose a scoring strategy: the image is divided into small patches, and the noise variance is calculated by a kurtosis-concentration-based pixel-level noise estimation method. Then a sample of the noise variance and an inhomogeneity score of each region are fitted by a linear function. Experimental results confirmed the efficacy of the proposed method.

In [38] the problem of managing a large database of camera fingerprints is considered. Cameras’ fingerprints are represented as noise matrices whose resolution equals the size of the images the camera produces. A brute-force search for a specified fingerprint in a large database of N fingerprints takes O(nN) time, where n is the number of pixels in each fingerprint; the goal is to reduce this time. The paper proposes a fast search algorithm that extracts a digest of 10,000 values from the query fingerprint and approximately matches their positions with the positions of pixels in the digests of all database fingerprints. In the worst case the complexity could still be proportional to the database size, but in practice the algorithm is much faster: experiments showed that for two-megapixel fingerprints a search takes 0.2 seconds. However, this approach has some serious limitations. Nowadays image sensors are much bigger than two megapixels, so the search time will grow again. Moreover, the approach deals with searching an existing set of camera fingerprints, so such a fingerprint set is required; this remains impractical given the size of recent sensors, where, for example, for a 24-megapixel sensor the fingerprint must still be a matrix of corresponding size.
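As we read [38], the digest is essentially the set of positions of the largest-magnitude fingerprint values; a rough sketch of that idea follows (the overlap test and everything beyond the 10,000-value digest are our assumptions).

```python
import numpy as np

def digest(fingerprint: np.ndarray, size: int = 10_000) -> np.ndarray:
    """Positions of the largest-magnitude values of a fingerprint, used as
    a cheap proxy for the full fingerprint when searching a database."""
    flat = fingerprint.ravel()
    return np.argpartition(np.abs(flat), -size)[-size:]

def digest_overlap(query: np.ndarray, candidate: np.ndarray) -> float:
    """Fraction of shared digest positions; a high overlap flags the
    candidate for a full correlation test."""
    return len(np.intersect1d(query, candidate)) / len(query)
```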

In [16] a method for camera identification is discussed. The camera’s fingerprint is calculated in a similar spirit to [31], and evaluation is made using the peak-to-correlation-energy (PCE) ratio. The experiments are very representative: images are taken from the popular online image-sharing site Flickr, and the tests included more than a million images from 6896 cameras covering 150 models. In [15] the same problem is solved using support vector machines (SVM) with decision fusion techniques.

3 Analysis of vignetting and lens distortion

In this section we consider the artifacts of vignetting and lens distortion in order to recognize a camera’s brand. In both cases an image is represented according to the \(\mathcal{RGB}\) model as an M × N × 3 data array that defines the red, green and blue color components of each individual pixel.

3.1 Vignetting-CT algorithm

As mentioned in the Introduction, vignetting is a defect consisting in a reduction of brightness across the image frame, usually in its corners. The easiest way to observe the presence of vignetting is a photo of a plain surface (Fig. 1).

Fig. 1 Example of an image of a plain surface. The pixel intensity in the middle of the image is 94; the average pixel intensities in the corners a1, a2, a3, a4 are, respectively, 74, 72, 84 and 85

We propose a procedure called Vignetting-Camera Tracing (Vignetting-CT) for calculating the differences of pixel intensities in image corners (Algorithm 1).

Algorithm 1 Vignetting-CT

The Vignetting-CT algorithm is very simple: it divides an image into four “small” parts at its corners and calculates the mean pixel intensities expressed in \(\mathcal{RGB}\) notation. The algorithm can be performed on any color channel, but we propose to process the red (\(\mathcal{R}\)) channel (line 1). The next step is median filtering of this channel (line 2) and calculating the residuum S, defined as the absolute difference between the color channel and its median-filtered form (line 3). The parameter d defines the size of the image parts to be analyzed (lines 4, 5); we propose to use d = 0.05. Then the mean pixel intensities in the image corners are calculated (lines 6-9). Finally, we compute the mean value \(\hat {d}\) of the pixel intensities in the corners of the residuum S (line 10), and we propose to use the \(\hat {d}\) value as a fingerprint for recognizing the camera’s brand. A sample division of the image frame is presented in Fig. 1.
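A minimal Python sketch of the procedure follows; the median-filter kernel size is our assumption, and the line numbers in the comments refer to Algorithm 1.

```python
import numpy as np
from scipy.ndimage import median_filter

def vignetting_ct(img: np.ndarray, d: float = 0.05) -> float:
    """Vignetting-CT fingerprint d_hat: the mean pixel intensity of the
    residuum S in the four image corners."""
    R = img[:, :, 0].astype(np.float64)        # red channel (line 1)
    S = np.abs(R - median_filter(R, size=3))   # residuum S (lines 2-3)
    m, n = S.shape
    dm, dn = int(d * m), int(d * n)            # corner size (lines 4-5)
    corners = [S[:dm, :dn], S[:dm, -dn:], S[-dm:, :dn], S[-dm:, -dn:]]
    corner_means = [c.mean() for c in corners]   # a1..a4 (lines 6-9)
    return float(np.mean(corner_means))          # d_hat (line 10)
```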

It is worth noticing that operating on S reveals the vignetting even if the image is not blank. We have inspected the pixel intensities of the residuum S in detail, and it turns out that in most images the average pixel intensity of the whole of S is brighter than that of the considered corners. A sample image is presented in Fig. 2, and sample values for some photos from Nikon D70s (1) are presented in Table 1.

Fig. 2 Sample image and its residuum S, calculated as the absolute difference between the image and its median-filtered version. The average pixel intensity in S is 0.6; the corner intensities in S are \(\hat {s_{a_{1}}} = 0.36\), \(\hat {s_{a_{2}}} = 0.49\), \(\hat {s_{a_{3}}} = 0.34\), \(\hat {s_{a_{4}}} = 0.54\)

Table 1 Sample vignetting values for Nikon D70s (1) in residuum S

We propose to use a median filter instead of the wavelet-based denoising filter mainly used for denoising images [18, 21, 31]; median filtering is noticeably faster. The Vignetting-CT algorithm is computationally effective: its complexity is O(mn), where m and n define the size of the image parts used for calculating pixel intensities. Because m and n are small, the calculations are carried out very quickly.

3.2 Lens distortion

Distortion is a defect that changes the geometry of the image transmitted by the lens to the camera’s photosensitive sensor, especially near the corners and edges of the frame. Essential for calculating distortion is the use of a calibration grid consisting of crossing vertical and horizontal lines [5], which is compared with the lines in the image. If the lines in the image coincide with the grid, there is no distortion in the photo; otherwise, there is barrel or pincushion distortion. Barrel distortion appears when image magnification decreases with distance from the optical axis; it often appears in pictures taken with wide-angle lenses, and image elements look as if they were bent outwards. Conversely, in pincushion distortion image magnification increases with distance from the optical axis, and the image seems to be squeezed towards the center of the frame. A sample photo with visible barrel distortion is presented in Fig. 3.

Fig. 3 Image with distortion and the grid of horizontal and vertical lines. Red lines denote distorted points

In the literature there are many models that describe lens distortion; the most common are polynomial models [7, 14]. We propose the model defined in (1), a simplified version of Brown’s model [7].

$$ p_{u} = p_{d}(1+kr^{2}) $$
(1)

where:

  • \(p_u = (x_u, y_u)\) – undistorted image point;

  • \(p_d = (x_d, y_d)\) – distorted image point;

  • k – distortion parameter;

  • \(r = \sqrt{(x_{d} - x_{u})^{2}+(y_{d} - y_{u})^{2}}\).

We propose to calculate the distortion parameter k for sets of images from different cameras and check whether there are tendencies that might help to recognize a specific camera. The points pu and pd can be determined using software (for example, Hugin Photo Stitcher [19]) or manually. Knowing the distorted and undistorted points, the procedure reduces to solving (1) for the unknown k. Of course, a set of distorted and corresponding undistorted points yields the same number of k values, which may differ; in such a case we propose to average all the k values.

Let us consider a simple toy example that shows the reasoning behind calculating the value of distortion.

Example 1 (Toy example)

Suppose that we have the coordinates of the following distorted points pd and corresponding undistorted points pu: \(p_{d_{1}} = (100,80)\), \(p_{u_{1}} = (110,90)\); \(p_{d_{2}} = (210,100)\), \(p_{u_{2}} = (200,110)\); \(p_{d_{3}} = (215,177)\), \(p_{u_{3}} = (217,184)\).

First we calculate r1, r2 and r3: \(r_{1} = \sqrt {(x_{d_{1}} - x_{u_{1}})^{2}+(y_{d_{1}} - y_{u_{1}})^{2}} = \sqrt {(100-110)^{2}+(80-90)^{2}} = 14.14\), so \(r_{1}^{2} = 200\). Similarly, r2 = 14.14 (\(r_{2}^{2} = 200\)) and r3 = 7.28 (\(r_{3}^{2} = 53\)).

Substituting the first pair of points into (1) coordinate-wise and solving for k:
110 = 100(1 + k · 200), giving \(k_{1}^{(1)} = 0.0005\);
90 = 80(1 + k · 200), giving \(k_{1}^{(2)} = 0.000625\).

Similarly, the pair \(p_{d_{2}}\), \(p_{u_{2}}\) gives k values of −0.000238 and 0.0005, and the pair \(p_{d_{3}}\), \(p_{u_{3}}\) gives 0.000176 and 0.000746.

Finally, we calculate the mean of all k values: \(k = \frac {0.0005+0.000625+(-0.000238)+0.0005+0.000176+0.000746}{6} \approx 0.000385\).

Thus k ≈ 0.000385 is the sought parameter.

The above procedure should be performed for each image separately. We propose to treat the k parameter as a trait that can be used to distinguish cameras. Thanks to its simplicity, the proposed procedure is fast and can be easily implemented; a minimal sketch is given below.
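The sketch below (our own Python, under the definitions above) solves (1) for k coordinate-wise and averages the results, reproducing the value from Example 1.

```python
import numpy as np

def distortion_k(pd: np.ndarray, pu: np.ndarray) -> float:
    """Estimate k in model (1), p_u = p_d (1 + k r^2): solve each
    coordinate equation for k and average all values."""
    pd = pd.astype(np.float64)
    pu = pu.astype(np.float64)
    r2 = np.sum((pd - pu) ** 2, axis=1)   # r^2 for each point pair
    ks = (pu / pd - 1.0) / r2[:, None]    # one k per coordinate
    return float(np.mean(ks))

# Point pairs from Example 1:
pd = np.array([[100, 80], [210, 100], [215, 177]])
pu = np.array([[110, 90], [200, 110], [217, 184]])
print(distortion_k(pd, pu))               # ~0.000385
```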

4 Experimental verification

In this section we compare the results of recognizing cameras’ brands by analyzing the \(\hat {d}\) value of Vignetting-CT and the distortion parameter k against Lukás et al.’s algorithm [31], in terms of classification efficiency and time performance. Details of Lukás et al.’s algorithm are recalled in Appendix I. For the evaluation of this algorithm we use the authors’ original MATLAB implementation [30]. Both the Vignetting-CT algorithm and the script for calculating the distortion parameter k are implemented in MATLAB.

We use the accuracy (ACC), defined in the standard way, as the evaluation statistic:

$$ \text{ACC} = \frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}}~, $$

where TP/TN denote “true positive/true negative” and FP/FN stand for “false positive/false negative”: TP is the number of cases correctly classified to a specific class, TN the instances correctly rejected, FP the cases incorrectly classified to the specific class, and FN the cases incorrectly rejected.
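For the multi-class confusion matrices reported below, this reduces to the diagonal of the matrix over the grand total; a one-function sketch:

```python
import numpy as np

def accuracy(confusion: np.ndarray) -> float:
    """Overall accuracy: correctly classified images (the diagonal of the
    confusion matrix) divided by all classified images."""
    return float(np.trace(confusion) / confusion.sum())
```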

4.1 Devices

Experiments are conducted on two datasets. The first dataset contains images from popular smartphones (further called the “smartphones dataset”). We used 264 JPEG images from 12 smartphones: Apple iPhone 6, Asus ZenFone 2, HTC One M9, Huawei P8, LG G3, LG G4, Lumia 1020, Lumia 1520, Samsung Galaxy Note 4, Samsung Galaxy S6, Sony Xperia Z3 and Sony Xperia Z3+. All these devices contain CMOS sensors. The second dataset includes images from the Dresden Image Database [9], which consists of tens of thousands of images taken by different cameras and is often used in research [8, 12]. We used 11787 JPEG images from 48 cameras: Agfa DC 733s, Agfa DC 830i, Agfa Sensor 505, Agfa Sensor 530s, Canon Ixus 55, Canon Ixus 70 (3 devices), Casio EX Z150 (5 devices), Kodak M1063 (5 devices), Nikon CoolPix S710 (5 devices), Nikon D70 (2 devices), Nikon D70s (2 devices), Nikon D200 (2 devices), Olympus 1050SW (5 devices), Praktica DCZ5 (5 devices), Rollei RCP 7325XS (3 devices), Samsung L74 (3 devices) and Samsung NV15 (3 devices). In most cases the same image frames were photographed by the different devices. All cameras in this dataset contain CCD sensors. In both datasets all images are JPEG lossy-compressed and come directly from the cameras; we do not assume any further processing of the images, such as the user’s graphic editing.

4.2 Experiment I – brand identification by analyzing vignetting

We analyze the influence of underexposed areas in the corners of images on recognizing the camera. The experiments are performed as follows. The \(\hat {d}\) value is calculated for every image from each camera, and the mean \(\hat {d}\) of the images from a specific camera is computed. To classify a new image K, its \(\hat {d}_{K}\) is calculated; the image is then assigned to the camera whose mean \(\hat {d}\) is closest, and assumed to be taken by this camera. A minimal sketch of this rule follows.
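This is a one-dimensional nearest-mean rule; a short Python sketch (the mean \(\hat {d}\) values below are invented for illustration):

```python
def classify(d_hat_new: float, camera_means: dict) -> str:
    """Assign an image to the camera whose mean d_hat is closest to the
    image's own d_hat value."""
    return min(camera_means, key=lambda cam: abs(camera_means[cam] - d_hat_new))

means = {"Nikon D70s": 0.42, "Canon Ixus 70": 0.55}  # illustrative only
print(classify(0.47, means))                          # -> "Nikon D70s"
```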

For the smartphones dataset the \(\hat {d}\) value was calculated for 22 images per device. The classification results are presented in Tables 2, 3, 4 and 5.

Table 2 Confusion matrix of model recognition, Vignetting-CT, smartphones dataset, ACC = 0.62
Table 3 Confusion matrix of brand recognition, Vignetting-CT, smartphones dataset, ACC = 0.72
Table 4 Confusion matrix of model recognition, Lukás et al.’s algorithm, smartphones dataset, ACC = 0.84
Table 5 Confusion matrix of brand recognition, Lukás et al.’s algorithm, smartphones dataset, ACC = 0.84

For the Dresden Image Database the \(\hat {d}\) value was calculated for at least 180 images per device. The results of brand classification are presented in Tables 6 and 7. For clarity, we do not present results for model classification, due to the number of cameras (48).

Table 6 Confusion matrix of brand recognition, Vignetting-CT, Dresden Image Database, ACC = 0.52
Table 7 Confusion matrix of brand recognition, Lukás et al., Dresden Image Database, ACC = 0.83

The confusion matrices show that brand classification is noticeably better for Lukás et al.’s algorithm: on the two tested datasets its accuracy is 84% for the smartphones dataset and 83% for the Dresden Image Database, while the efficiency of the Vignetting-CT algorithm is 72% and 52%, respectively. An advantage of Lukás et al.’s algorithm is its classification "stability", which is especially visible in the confusion matrices of brand recognition.

It is worth noting that Lukás et al.’s algorithm achieves better performance on older devices with CCD sensors. In [2] it was shown that the performance of this algorithm is lower for newer cameras with CMOS sensors, and recent cameras have CMOS sensors instead of CCDs.

4.3 Time performance

The data presented in Table 8 and Fig. 4 clearly indicate that Lukás et al.’s algorithm is defeated in terms of image processing time. The Vignetting-CT algorithm processes images in real time, while Lukás et al.’s takes on average about 90 seconds per image. Of course, processing time depends on image resolution: lower-resolution images (e.g., 6 megapixels, 3000×2000 px) are processed in less than one minute, while 24-megapixel images (6000×4000 px) take about four minutes. In total, processing all 12051 images took less than 40 minutes with the Vignetting-CT algorithm and about 294 hours with Lukás et al.’s algorithm. Such poor time performance excludes the use of Lukás et al.’s algorithm at a mass scale. In [36] it was examined whether the processing time of Lukás et al.’s algorithm could be decreased by processing small fragments of images; 50×50-pixel fragments of photos were used, but the classification results were not satisfactory.

Table 8 Processing time of smartphones dataset and Dresden Image Database (12051 photos in total)
Fig. 4 Comparison of the time performance of Lukás et al.’s algorithm and Vignetting-CT for all images (12051)

The experiments were conducted on an MSI GV62-7RD notebook with a quad-core Intel Core i5-7300HQ processor and 24 GB of RAM. It is worth mentioning that a camera’s fingerprint in Lukás et al.’s algorithm is stored as a matrix the size of the camera’s images. The authors’ implementation produces fingerprint files as MATLAB *.mat files, which usually weigh at least 110 megabytes each; the fingerprints calculated for the two datasets of over 12 thousand images weigh about 1.2 terabytes.

4.4 Experiment II – comparison of lens distortion

We analyze the lens distortion parameter k for images of the same frames taken with different devices. The script for calculating the k parameter was written in MATLAB, but the Hugin Photo Stitcher software [19], which measures distortion, can also be used; we compared the distortion results from our MATLAB script with Hugin and obtained the same values. The analysis shows that, despite photographing the same scene, the distortion parameter differs between devices. Sample results are presented in Figs. 5 and 6. It is worth mentioning that different devices of the same camera model also give different values of the k parameter. One such example is shown in Fig. 5, where the same image frame from two different smartphones (Huawei P8 and Samsung S6) yields different distortion parameters; a similar situation for two devices of the Nikon CoolPix S710 (Dresden Image Database) is shown in Fig. 6.

Fig. 5 Lens distortion of images of the same frame, smartphones dataset. Image (1) (Huawei P8): distortion parameter k = −0.6782; image (2) (Samsung S6): distortion parameter k = −0.2

Fig. 6 Lens distortion of images of the same frame, Dresden Image Database. Image (1) (Nikon CoolPix S710, device #2): distortion parameter k = −0.10323; image (2) (Nikon CoolPix S710, device #0): distortion parameter k = −0.01799

Sample values of the distortion parameter k are presented in Tables 9 and 10; due to the large number of devices and images, for clarity we present only part of the full results. It is visible that photos taken by different devices have different distortion parameters. The proposed approach therefore tells whether a set of images was taken by one camera or by several, but it cannot determine which model or even brand was used. An advantage of the proposed method is its speed: the distortion parameter k is calculated in real time. Moreover, many applications calculate and correct lens distortion in photos, so there is no need to implement a distortion algorithm manually. Of course, the proposed method can be useful in simple cases of comparing similar photos, but it may not be practical for sets of different images, one reason being that the distortion parameter changes with the distance to the object, the angle of view and the focal length.

Table 9 Sample values of k lens distortion parameter of the same image frame for different sample devices, smartphones dataset
Table 10 Sample values of k lens distortion parameter of the same image frame for different sample devices, Dresden Image Database

4.5 Summary

We have analyzed the influence of the vignetting and lens distortion defects on the problem of digital camera recognition. The experiments show that the analysis of vignetting can be used for brand recognition. Compared to Lukás et al.’s algorithm the recognition rate is lower, but the Vignetting-CT algorithm beats Lukás et al.’s algorithm in terms of speed. The efficiency of brand recognition on the smartphones dataset and the Dresden Image Database is 72% and 52%, respectively, while Lukás et al.’s algorithm achieves 84% and 83%. However, the Vignetting-CT algorithm processes images in real time, while Lukás et al.’s takes on average about 90 seconds per photo; it therefore needed more than 290 hours to calculate the camera fingerprints, which excludes this algorithm from use at a mass scale.

We have also experimentally shown that the analysis of lens distortion can help determine whether a set of images of the same frame was taken by the same camera: photos of the same frame made with different cameras yield different distortion parameters. Such an approach may not be practical for camera recognition across images of different frames, but it can be useful for the analysis of similar photos. The main disadvantage of distortion is its heterogeneity: distortion changes with the distance to the object, the focal length and the angle of view, so there is probably no reasonable model that could be used for tracing cameras at a mass scale. Another limitation is that if there are no straight lines in the picture, the distortion cannot be determined.

5 Conclusion and future work

In this paper the problem of recognizing digital cameras from photographs was examined. The most popular solutions are based on denoising images with a wavelet-based filter and calculating the so-called sensor pattern noise, which, averaged, gives the camera’s fingerprint. We have proposed a novel approach to tracing digital cameras through the analysis of the vignetting and distortion defects, and compared the obtained results with the state-of-the-art algorithm of Lukás et al. The experiments have shown that, despite its lower efficiency, the analysis of the vignetting defect makes it possible to recognize the brand of a camera. Our approach defeats Lukás et al.’s algorithm in terms of image processing time: the proposed Vignetting-CT algorithm processes images in real time, while Lukás et al.’s algorithm needs on average 90 seconds to calculate the sensor pattern noise of one photograph. Moreover, the Vignetting-CT algorithm does not require calculating the cameras’ fingerprints, which is very time consuming. The analysis of distortion showed that images from different devices (even of the same model) yield different distortion parameters, so it is possible to determine whether photos were taken by the same camera. This method is useful for sets of similar images and is very fast as well, since calculating the distortion parameter is performed in real time.

Future work will concern further experiments with lens distortion analysis. It should be examined whether it is possible to propose a distortion model that could be used for reliable and more universal camera recognition. It would also be interesting to check whether other optical defects, such as chromatic or spherical aberration, can be used to trace a camera’s brand. Moreover, we are going to check the efficiency of the Vignetting-CT algorithm with classifiers based on deep learning or convolutional neural network approaches.