1 Introduction

While GANs can extract useful representations from data, translate textual descriptions into images, transfer scene and style information between images, or detect objects (Gui et al., 2022), they also enable abusive actors to quickly and easily generate potentially damaging, highly realistic fake images, colloquially called deepfakes (Parkin, 2012; Frank et al., 2020). As the internet becomes an ever more prominent virtual public space for political discourse and social media (Applebaum, 2020), deepfakes present a looming threat to its integrity that must be met with techniques for differentiating the real and trustworthy from the fake.

Previous algorithmic techniques for separating real from computer-hallucinated images of people have relied on identifying fingerprints in either the spatial (Yu et al., 2019; Marra et al., 2019) or frequency (Frank et al., 2020; Dzanic et al., 2020) domain. However, to the best of our knowledge, no techniques have jointly considered the two domains in a multi-scale fashion. To do so, we propose to employ wavelet-packet coefficients, which provide a spatio-frequency representation.

In this paper, we make the following contributions:

  • We present a wavelet-packet-based analysis of GAN-generated images. Compared to existing work in the frequency domain, we examine the spatio-frequency properties of GAN-generated content for the first time. We find differences between real and synthetic images in both the wavelet-packet means and standard deviations, which grow with increasing frequency and are concentrated at the image edges. The differences in mean and standard deviation also hold between different sources of synthetic images.

  • To the best of our knowledge, we present the first application and implementation of boundary wavelets for image analysis in the deep learning context. Proper implementation of boundary-wavelet filters allows us to share identical network architectures across different wavelet lengths.

  • As a result of the aforementioned wavelet-packet-based analysis, we build classifiers to identify image sources. We work with fixed seed values for reproducibility and report means and standard deviations over five runs whenever possible. Our systematic analysis shows improved or competitive performance.

  • Integrating existing Fourier-based methods, we introduce fusion networks combining both approaches. Our best networks outperform previous purely Fourier- or pixel-based approaches on the CelebA and LSUN bedrooms benchmarks. Both benchmarks have been previously studied in Yu et al. (2019) and Frank et al. (2020).

We believe our virtual public spaces and social media outlets will benefit from a growing, diverse toolbox of techniques enabling automatic detection of GAN-generated content. This paper presents a wavelet-packet-based approach as a competitive and interpretable addition. The source code for our wavelet toolbox, the first publicly available toolbox for fast boundary-wavelet transforms in the Python world, and for our experiments is available at https://github.com/v0lta/PyTorch-Wavelet-Toolbox and https://github.com/gan-police/frequency-forensics.

2 Motivation

“Wavelets are localized waves. Instead of oscillating forever, they drop to zero.” (Strang & Nguyen, 1996). This important observation explains why wavelets allow us to conserve some spatio-temporal relations. The Fourier transform uses sine and cosine waves, which never stop fluctuating. Consequently, Fourier delivers an excellent frequency resolution while losing all spatio-temporal information. We want to learn more about the spatial organization of differences in the frequency domain, which is our first motivation to work with wavelets.

The second part of our motivation lies in the representational power of wavelets. Images often contain sharp borders where pixel values change rapidly, as well as monochromatic areas where the signal barely changes at all. Sharp borders or steps are hard to represent in Fourier space: since the sine and cosine waves do not fit well, we observe the Gibbs phenomenon (Strang & Nguyen, 1996), where high-frequency coefficients pollute the spectrum. Representing a dull, flat region exactly likewise requires an infinite Fourier sum (Van den Berg, 2004). The Fourier transform is uneconomical in these two cases. Furthermore, the fast wavelet transform can be used with many different wavelets; having the freedom to explore and choose a basis that suits our needs is our second motivation to work with wavelets.
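To illustrate this argument, the following small sketch (our own illustration, not an experiment from this paper) counts how many Fourier and Haar-wavelet coefficients are needed to capture 99.9% of the energy of a step signal. The signal length, the energy threshold, and the placement of the jump on a dyadic boundary, which favours Haar, are arbitrary choices.

```python
import numpy as np
import pywt

# A step signal: flat, a single sharp jump, then flat again.
x = np.zeros(256)
x[128:] = 1.0

def coeffs_for_energy(coeffs, fraction=0.999):
    """Smallest number of largest-magnitude coefficients holding `fraction` of the energy."""
    mags = np.sort(np.abs(coeffs))[::-1]
    energy = np.cumsum(mags ** 2)
    return int(np.searchsorted(energy, fraction * energy[-1]) + 1)

# Fourier: the sharp step spreads energy over many harmonics (Gibbs phenomenon).
fourier_coeffs = np.fft.rfft(x)

# Haar wavelets: the localized filters capture the step with a handful of coefficients.
wavelet_coeffs = np.concatenate(pywt.wavedec(x, "haar", level=5))

print("Fourier coefficients for 99.9% energy:", coeffs_for_energy(fourier_coeffs))
print("Haar coefficients for 99.9% energy:   ", coeffs_for_energy(wavelet_coeffs))
```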

Before discussing the mechanics of the fast wavelet transform (FWT) and its packet variant, we present a proof-of-concept experiment in Fig. 1. We investigate the coefficients of the level-3 Haar wavelet packet transform. Computations use 5k \(1024 \times 1024\) pixel images from Flickr Faces High Quality (FFHQ) and 5k \(1024 \times 1024\) pixel images generated by StyleGAN.

Fig. 1
figure 1

This figure compares single time-domain images with absolute \(\ln\)-scaled mean wavelet packets and their standard deviations. The first column shows two example images, the second the mean packets, the third the standard deviations, and the fourth the differences in mean and standard deviation. Mean values and standard deviations have been computed on 5k \(1024 \times 1024\) FFHQ and StyleGAN-generated images each. The packets are arranged in frequency order, increasing from the top left to the bottom right. We observe significant differences: the means differ most at higher frequencies and at image edges, while the standard deviations differ in the background across the entire spectrum. Fig. 22 in the Appendix shows the exact filter sequences for each packet. Best viewed in color

The leftmost column of Fig. 1 shows an FFHQ sample and a StyleGAN-generated one. Mean wavelet packets appear in the second column and their standard deviations in the third. Finally, the rightmost column plots the absolute differences of the mean packets and of the standard deviations. For improved visibility, we rescale the absolute values of the packet representation, averaged first over the three color bands, using the natural logarithm (ln).
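For readers who want to reproduce this kind of plot, the following sketch computes ln-scaled absolute level-3 Haar packet coefficients for a small batch of single-channel images with the PyWavelets package. It is only a minimal stand-in: the random data, the batch size, and the natural ordering of the packet paths are placeholders; the paper's own pipeline uses the PyTorch toolbox referenced above, averages over the three color bands, and arranges the packets in frequency order.

```python
from itertools import product

import numpy as np
import pywt

def haar_packets_level3(image: np.ndarray) -> np.ndarray:
    """Return the 64 level-3 Haar wavelet-packet coefficient blocks of a 2D image."""
    wp = pywt.WaveletPacket2D(data=image, wavelet="haar", mode="zero", maxlevel=3)
    paths = ["".join(p) for p in product("ahvd", repeat=3)]   # natural order, not frequency order
    return np.stack([wp[path].data for path in paths])        # shape (64, h/8, w/8)

# Toy batch standing in for the 5k FFHQ or StyleGAN images.
rng = np.random.default_rng(0)
images = rng.random((16, 128, 128))

packets = np.stack([haar_packets_level3(img) for img in images])   # (16, 64, 16, 16)
ln_packets = np.log(np.abs(packets) + 1e-12)                        # ln-scaled absolute values

mean_packets = ln_packets.mean(axis=0)   # per-packet means, cf. the second column of Fig. 1
std_packets = ln_packets.std(axis=0)     # per-packet standard deviations, cf. the third column
print(mean_packets.shape, std_packets.shape)                        # (64, 16, 16) each
```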

We find that the mean wavelet packets of GAN-generated images are often significantly brighter. The differences become more pronounced as the frequency increases from the top left to the bottom right. The differences in high-frequency packets independently confirm Dzanic et al. (2020), who saw the most significant differences for high-frequency Fourier-coefficients.

The locality of the wavelet filters allows face-like shapes to appear, and the image edges stand out, allowing us to pinpoint the frequency disparities. We now have an intuition regarding the local origin of the differences. We note that Haar wavelet packets do not use padding or orthogonalization and conclude that the differences stem from GAN generation. For the standard deviations, the picture is reversed: the FFHQ packets appear brighter than the StyleGAN packets. The StyleGAN packets do not vary as much as those of the original data set, which suggests that the GAN did not capture the complete variance of the original data. In particular, our evidence indicates that GAN-generated backgrounds are unusually monotonous across all frequency bands.

In the next sections, we survey related work and discuss how the fast wavelet transform (FWT) and wavelet packets in Fig. 1 are computed.

3 Related work

3.1 Generative adversarial networks

The advent of GANs (Goodfellow et al., 2014) heralded several successful image generation projects. We highlight the contribution of the progressive growing technique, in which a small network is first trained on small images and is then gradually grown in complexity (number of weights) and image size, to the ability of the optimization to converge at high resolutions of \(1024 \times 1024\) pixels (Karras et al., 2018). As a supplement, style transfer methods (Gatys et al., 2016) have been integrated into new style-based generators. The resulting StyleGANs have increased the statistical variation of the hallucinated faces (Karras et al., 2019, 2020). Finally, regularization (e.g., using spectral methods) has increased the stability of the optimization process (Miyato et al., 2018). These advances have allowed large-scale training (Brock et al., 2019), which in turn further improves quality.

Conditional methods allow partial manipulation of images; Antipov et al. (2017), for example, propose to use conditional GANs for face aging. Similarly, GANs allow the transfer of facial expressions (Deepfake-Faceswap-Contributors, 2018; Thies et al., 2019), an ability of particular importance from a detection point of view. For a recent exhaustive review of GAN theory, architectures, algorithms, and applications, we refer to Gui et al. (2022).

3.2 Diffusion probabilistic models

As an alternative class of approaches to GANs, two closely related probabilistic generative models, diffusion (probabilistic) models and score-based generative models, were recently shown to produce high-quality images; see Ho et al. (2020), Dhariwal and Nichol (2021), Song et al. (2021) and their references. In short, images are generated in the sampling phase by reversing a gradual noising process used during training. Further, Dhariwal and Nichol (2021) propose to use gradients from a classifier to guide a diffusion model during sampling, with the aim of trading off diversity for fidelity.

3.3 Deepfake detection

Deepfake detectors broadly fall into two categories. The first group works in the frequency domain. Projects include the study of natural and deep-network-generated images in the frequency space created by the discrete cosine transform (DCT), as well as detectors based on Fourier features (Zhang et al., 2019; Durall et al., 2019; Dzanic et al., 2020; Frank et al., 2020; Durall et al., 2020; Giudice et al., 2021). These provide frequency-space information, but the global nature of the underlying bases means that all spatial relations are missing. In particular, Frank et al. (2020) visualize the DCT-transformed images and identify artifacts created by different upsampling methods. We learn that the transformed images are efficient classification features, which allow significant parameter reductions in GAN-identification classifiers. Instead of relying on the DCT, Dzanic et al. (2020) studied the distribution of Fourier coefficients for real and GAN-generated images and, after noting significant deviations of the mean frequencies between the two, trained classifiers. Similarly, Giudice et al. (2021) study statistics of the DCT coefficients and use per-GAN-engine estimates for classification. Dzanic et al. (2020) further found high-frequency discrepancies for GAN-generated imagery, built detectors, and spoofed the newly trained Fourier classifiers by manually adapting the coefficients in Fourier space. Finally, He et al. (2021) combined 2D-FFT and DCT features and observed improved performance.

The second group of classifiers works in the spatial or pixel domain. Among others, Yu et al. (2019), Wang et al. (2020a, b), and Zhao et al. (2021a, b) train (convolutional) neural networks directly on the raw images to identify various GAN architectures. Building upon pixel-CNNs, Wang and Deng (2021) add neural attention. According to Yu et al. (2019), the classifier features in the final layers constitute a fingerprint for each GAN and are interesting to visualize. Instead of relying on deep learning, Marra et al. (2019) proposed GAN fingerprints computed exclusively with denoising-filter and mean operations, whereas Guarnera et al. (2020) compute convolutional traces.

Tang et al. (2021) use a simple first-level discrete wavelet transform, i.e., no multi-scale representation, as a preprocessing step to compute spectral correlations between the color bands of an image, which are then processed further to obtain features for a classifier. Younus and Hasan (2020) use a Haar wavelet transform to study edge inconsistencies in manipulated images.

To the best of our knowledge, we are the first to work directly with a spatio-frequency wavelet packet representation in a multi-scale fashion. Our analysis goes beyond previous Fourier-based studies. Fourier loses all spatial information while wavelets preserve it. Compared to previous wavelet-based work, we also consider the packet approach and boundary wavelets. The packet approach allows us to decompose high-frequency bands in fine detail. Boundary wavelets enable us to work with higher-level wavelets, which the machine learning literature has largely ignored thus far. As we demonstrate in the next section, wavelet packets allow visualization of frequency information while, at the same time, preserving local information.

Previously proposed deepfake detectors had to detect fully (Frank et al., 2020) or partially (Rossler et al., 2019) faked images. In this paper, we do both: Sections 5.1, 5.2 and 5.3 consider complete fakes, and Sect. 5.4 studies the detection of partial neural fakes. Additionally, we examine images generated by guided diffusion (Dhariwal & Nichol, 2021) in Sect. 5.5 and demonstrate our ability to identify images from this source by training a forensic classifier. A recent survey on deepfake generation and detection can be found in Zhang (2022).

3.4 Wavelets

Originally developed within applied mathematics (Mallat, 1989; Daubechies, 1992), wavelets are a long-established part of the image analysis and processing toolbox (Taubman & Marcellin, 2002; Strang & Nguyen, 1996; Jensen & la Cour-Harbo, 2001). In traditional image coding, wavelet-based codecs have been developed to replace techniques based on the DCT (Taubman & Marcellin, 2002). Early applications of wavelets in machine learning studied the integration of wavelets into neural networks for function learning (Zhang et al., 1995). Within this line of work, wavelets have previously served as input features (Li et al., 2015) or as early layers of scatter nets (Mallat, 2012; Cotter, 2020). Deeper inside neural networks, wavelets have appeared within pooling layers using static Haar (Wang et al., 2020c; Williams & Li, 2018) or adaptive wavelets (Wolter & Garcke, 2021; Ha et al., 2021). Within the subfield of generative networks, concurrent research (Gal et al., 2021) explored the use of Haar wavelets to improve GAN-generated image content. Bruna and Mallat (2013) and Oyallon and Mallat (2015) found that fixed wavelet filters in early convolution layers can lead to performance similar to trained neural networks; the resulting architectures are known as scatter nets. This paper follows a similar line of work by building neural network classifiers on top of wavelet packets. We find this approach particularly efficient in limited-training-data scenarios. Section 5.2.1 discusses this observation in detail.

The machine learning literature often limits itself to the Haar wavelet (Wang et al., 2020c; Williams & Li, 2018; Gal et al., 2021), perhaps because it is padding-free and does not require boundary value treatment. The following sections explain the FWT and how to work with longer wavelets effectively.

4 Methods

In this section, we briefly present the nuts and bolts of the fast wavelet transform and its packet variant. For multi-channel color images, we transform each color separately. For simplicity, we will only consider single channels in the following exposition.

4.1 The fast wavelet transform (FWT)

FWTs (Mallat, 1989; Daubechies, 1992; Strang & Nguyen, 1996; Jensen & la Cour-Harbo, 2001; Mallat, 2009) utilize convolutions to decompose an input signal into its frequency components; repeated applications of the wavelet transform result in a multi-scale analysis. Convolution is a linear operation, and linear operations can be written in matrix form. Consequently, we seek a matrix that allows the computation of the fast wavelet transform (Strang & Nguyen, 1996):

$$\begin{aligned} \mathbf {b} = \mathbf {A}\mathbf {x}. \end{aligned}$$
(1)

\(\mathbf {A}\) is a product of multiple scale-matrices. The non-zero elements in \(\mathbf {A}\) are populated with the coefficients of the selected filter pair. Given the wavelet filter degree d, each filter has \(N = 2d\) coefficients. Repeating diagonals compute convolution operations with the so-called analysis filter vector pair \(\mathbf {f}_\mathcal {L}\) and \(\mathbf {f}_\mathcal {H}\), where the filters are arranged as vectors \(\in \mathbb {R}^N\). The subscripts \(\mathcal {L}\) and \(\mathcal {H}\) denote the one-dimensional low-pass and high-pass filters, respectively. The filter pair appears within the diagonal patterns of the stride-two convolution matrices \(\mathbf {H}_\mathcal {L}\) and \(\mathbf {H}_\mathcal {H}\). Overall, one observes the pattern (Strang & Nguyen, 1996)

$$\begin{aligned} \mathbf{A} = \dots \begin{pmatrix} \begin{array}{c|c} \mathbf{H}_\mathcal{L} & \\ \mathbf{H}_\mathcal{H} & \\ \hline & \mathbf{I} \end{array} \end{pmatrix} \begin{pmatrix} \mathbf{H}_\mathcal{L} \\ \mathbf{H}_\mathcal{H} \end{pmatrix}. \end{aligned}$$
(2)

The equation describes the first two FWT-matrices. Instead of the dots, we can imagine additional analysis matrices. The analysis matrix \(\mathbf {A}\) records all operations by matrix multiplication.
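As a concrete illustration of Eqs. 1 and 2, the sketch below assembles the single-level Haar analysis matrix for a length-eight signal and checks it against PyWavelets. This is our own sanity check under the Haar filter convention used by pywt; longer filters would additionally require the boundary treatment of Sect. 4.3.

```python
import numpy as np
import pywt

n = 8
lo = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar analysis low-pass filter
hi = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar analysis high-pass filter

# Stride-two convolution matrices H_L and H_H: one filter copy per row, shifted by two samples.
H_lo = np.zeros((n // 2, n))
H_hi = np.zeros((n // 2, n))
for row in range(n // 2):
    H_lo[row, 2 * row: 2 * row + 2] = lo
    H_hi[row, 2 * row: 2 * row + 2] = hi

A = np.concatenate([H_lo, H_hi])                    # single-level analysis matrix
x = np.arange(n, dtype=float)

b = A @ x                                           # Eq. 1: b = A x
cA, cD = pywt.dwt(x, "haar", mode="zero")           # reference single-level FWT
print(np.allclose(b, np.concatenate([cA, cD])))     # True: matrix and filter bank agree
print(np.allclose(A @ A.T, np.eye(n)))              # True: the Haar matrix is orthogonal
```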

Fig. 2
figure 2

Truncated single-dimensional analysis convolution matrices for a signal of length 32, using, for example, a Daubechies-two wavelet. The decomposition level increases from right to left; the leftmost plot shows the product of the first three matrices. We are looking at truncated versions of the infinite matrices described by Eq. 2. The title reads \(\mathbf {C}_a\) because the analysis convolution matrix differs from the wavelet matrix, which we discuss further in Sect. 4.3 on boundary filters

In Fig. 2 we illustrate a level-three transform, where we see Eq. 2 at work. Ideally, \(\mathbf {A}\) is infinitely large and orthogonal. For finite signals, the ideal matrices have to be truncated. \(\mathbf {C}\) denotes finite-length untreated convolution matrices; the subscripts a and s mark analysis and transposed synthesis convolutions. Second-degree wavelet coefficients from four filters populate the convolution matrices. The identity tail of the individual matrices grows as scales complete. The final convolution matrix is shown on the left. The appendix section “Constructing the inverse fwt matrix” presents the construction of the synthesis matrices, which undo the analysis steps.

Common choices for 1D wavelets are the Daubechies wavelets (“db”) and their less asymmetric variants, the symlets (“sym”). We refer the reader to our appendix section “Daubechies wavelets and symlets” or Mallat (2009) for an in-depth discussion. Note that the FWT is invertible: to construct the synthesis matrix \(\mathbf {S}\) with \(\mathbf {S} \mathbf {A}~=~\mathbf {I}\), we require the synthesis filter pair \(\mathbf {f}_\mathcal {L}, \mathbf {f}_\mathcal {H}\). The filter coefficients populate transposed convolution matrices; the appendix section “Constructing the inverse fwt matrix” discusses the details. To ensure that the transform is invertible and the visualizations interpretable, not just any filter will do: the perfect reconstruction and anti-aliasing conditions must hold. We briefly present these two conditions in the appendix section “The perfect reconstruction and alias cancellation conditions” and refer to Strang and Nguyen (1996) and Jensen and la Cour-Harbo (2001) for an excellent further discussion.
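The perfect reconstruction property is easy to check numerically; the short sketch below round-trips a random signal through a Daubechies-four analysis and synthesis transform with PyWavelets. The signal length and wavelet choice are arbitrary.

```python
import numpy as np
import pywt

rng = np.random.default_rng(42)
x = rng.standard_normal(64)

# Analysis: decompose into approximation and detail coefficients.
coeffs = pywt.wavedec(x, "db4", level=3)

# Synthesis: the inverse transform reconstructs the signal up to floating-point error.
x_rec = pywt.waverec(coeffs, "db4")
print(np.allclose(x, x_rec))   # True: perfect reconstruction holds for this filter pair
```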

4.2 The two-dimensional wavelet transform

To extend the single-dimensional case to two dimensions, one-dimensional wavelet pairs are typically combined. Outer products allow us to obtain two-dimensional filter quadruples from one-dimensional filter pairs (Vyas et al., 2018):

$$\begin{aligned} \mathbf {f}_{a}&= \mathbf {f}_\mathcal {L}\mathbf {f}_\mathcal {L}^T,&\mathbf {f}_{h}&= \mathbf {f}_\mathcal {L}\mathbf {f}_\mathcal {H}^T,&\mathbf {f}_{v}&= \mathbf {f}_\mathcal {H}\mathbf {f}_\mathcal {L}^T,&\mathbf {f}_{d}&= \mathbf {f}_\mathcal {H}\mathbf {f}_\mathcal {H}^T. \end{aligned}$$
(3)

In the equations above, \(\mathbf {f}\) denotes a filter vector. In the two-dimensional case, a denotes the approximation coefficients, h denotes the horizontal coefficients, v denotes vertical coefficients, and d denotes the diagonal coefficients. The 2D transformation at representation level \(q+1\) requires the input \(\mathbf {x}_q\) as well as a filter quadruple \(\mathbf {f}_{k}\) for \(k \in [a, h, v, d]\) and is computed by

$$\begin{aligned} \mathbf {x}_q*\mathbf {f}_k = \mathbf {k}_{q+1}, \end{aligned}$$
(4)

where \(*\) denotes a stride-two convolution. Note that the input image is at level zero, i.e. \(\mathbf {x}_0 = \mathbf {I}\), while for stage q the low-pass result of stage \(q-1\) serves as input.
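A minimal sketch of Eqs. 3 and 4 with the Haar pair, using PyTorch's strided convolution. Note that conv2d computes a cross-correlation rather than a convolution; for the symmetric Haar filters this only flips the sign of some detail coefficients, and other filter sign conventions exist in the literature.

```python
import torch
import torch.nn.functional as F

# One common Haar convention for the analysis filter pair.
lo = torch.tensor([1.0, 1.0]) / 2 ** 0.5
hi = torch.tensor([1.0, -1.0]) / 2 ** 0.5

# Eq. 3: outer products yield the 2D filter quadruple.
f_a = torch.outer(lo, lo)   # approximation
f_h = torch.outer(lo, hi)   # horizontal detail
f_v = torch.outer(hi, lo)   # vertical detail
f_d = torch.outer(hi, hi)   # diagonal detail

# Stack as a conv weight of shape (out_channels=4, in_channels=1, 2, 2).
weight = torch.stack([f_a, f_h, f_v, f_d]).unsqueeze(1)

image = torch.rand(1, 1, 128, 128)           # toy single-channel input x_0
coeffs = F.conv2d(image, weight, stride=2)   # Eq. 4: stride-two filtering
print(coeffs.shape)                          # torch.Size([1, 4, 64, 64])
```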

Fig. 3
figure 3

Two dimensional analysis convolution matrix sparsity patterns. The first three matrices from the right are individual convolution matrices. The matrix on the very left is the combined sparse convolution matrix

The two dimensional transform is also linear and can therefore be expressed in matrix form:

$$\begin{aligned} \mathbf{A}_{2d} = \dots \begin{pmatrix} \begin{array}{c|c} \mathbf{H}_a & \\ \mathbf{H}_h & \\ \mathbf{H}_v & \\ \mathbf{H}_d & \\ \hline & \mathbf{I} \end{array} \end{pmatrix} \begin{pmatrix} \mathbf{H}_a \\ \mathbf{H}_h \\ \mathbf{H}_v \\ \mathbf{H}_d \end{pmatrix}. \end{aligned}$$
(5)

Similarly to the single dimensional case \(\mathbf {H}_k\) denotes a stride two, two dimensional convolution matrix. We write the inverse or synthesis operation as \(\mathbf {F}_k\). Figure 3 illustrates the sparsity patterns of the resulting two-dimensional convolution matrices.

4.3 Boundary wavelets

Fig. 4
figure 4

The effect of boundary wavelet treatment. Single-dimensional transformation matrices of shape 32 by 32 are constructed. The plot on the left shows the element-wise absolute values of \(\mathbf {C_s} \cdot \mathbf {C_a}\); the right plot shows the element-wise absolute values of \(\mathbf {S} \cdot \mathbf {A}\) for orthogonalized analysis and synthesis matrices. The identity matrix indicates that our matrices have been correctly assembled

So far, we have described the wavelet transform without considering the finite size of the images. The simple Haar wavelets, for example, can be used without modification in this case. But for the transform to preserve all information and remain invertible, higher-order wavelets require modifications at the boundary (Strang & Nguyen, 1996). There are different ways to handle the boundary, including zero-padding, symmetrization, periodic extension, and specific filters on the boundary. The disadvantage of zero-padding or periodic extensions is that discontinuities are artificially created at the border; with symmetrization, discontinuities of the first derivative arise at the border (Jensen & la Cour-Harbo, 2001). For large images, the boundary effects might be negligible. However, for the employed multi-scale wavelet-packet approach, introduced in the next subsection, the artifacts become too severe. Furthermore, zero-padding increases the number of coefficients, which in our application would require a different neural network architecture per wavelet. We therefore employ so-called Gram-Schmidt boundary filters (Jensen & la Cour-Harbo, 2001).

The idea is to replace the filters at the boundary with specially constructed, shorter filters that preserve both the signal length and the perfect reconstruction property of the wavelet transform. We illustrate the impact of the procedure in Fig. 4: the product of the untreated convolution matrices appears on the left, the product of the boundary wavelet matrices \(\mathbf {S} \cdot \mathbf {A}\) on the right. Appendix Figs. 17 and 18 illustrate the sparsity patterns of the resulting sparse analysis and synthesis matrices for the Daubechies-two case in 2D.

4.4 Wavelet packets

Standard wavelet transforms, presuming that the essential information lies in the lower frequencies, decompose only the low-pass or a coefficients further; the h, v, and d coefficients are left untouched. While this is often a reasonable assumption (Strang & Nguyen, 1996), previous work (Dzanic et al., 2020; Schwarz et al., 2021) found higher frequencies equally relevant for deepfake detection. For our analysis, the wavelet tree is consequently extended on both the low- and high-frequency sides. This approach is known as wavelet packet analysis. For a wavelet packet representation, one recursively continues to filter both the low- and high-pass results. Each recursion leads to a new level of filter coefficients, starting with an input image \(\mathbf {I} \in \mathbb {R}^{h,w}\) and \(\mathbf {n}_{0,0} = \mathbf {I}\). A node \(\mathbf {n}_{q,j}\) at position j of level q is convolved with all filters \(\mathbf {f}_k\), \(k \in [a, h, v, d]\):

$$\begin{aligned} \mathbf {n}_{q,j}*\mathbf {f}_k = \mathbf {n}_{q+1,k}. \end{aligned}$$
(6)

Once more, the star \(*\) in the equation above denotes a stride-two convolution. Therefore, every node at level q spawns four nodes at the next level \(q+1\). The result at the final level Q, assuming Haar or boundary wavelets without padding, is a \(4^Q \times \frac{h}{2^Q} \times \frac{w}{2^Q}\) tensor, i.e. the total number of coefficients is the same as before. Thereby, wavelet packets filter the input into progressively finer, equal-width frequency blocks with no redundancy. For excellent presentations of the one-dimensional case we again refer to Strang and Nguyen (1996) and Jensen and la Cour-Harbo (2001).
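The recursion of Eq. 6 can be written directly with strided convolutions. The sketch below, again only for the padding-free Haar case and with our own filter convention, decomposes a toy \(1024 \times 1024\) input into three packet levels and confirms the \(4^3 = 64 \times 128 \times 128\) coefficient shape stated above.

```python
import torch
import torch.nn.functional as F

# Haar analysis filter quadruple as a conv weight of shape (4, 1, 2, 2).
lo = torch.tensor([1.0, 1.0]) / 2 ** 0.5
hi = torch.tensor([1.0, -1.0]) / 2 ** 0.5
weight = torch.stack([torch.outer(a, b) for a in (lo, hi) for b in (lo, hi)]).unsqueeze(1)

def wavelet_packets(image: torch.Tensor, levels: int) -> torch.Tensor:
    """Full packet tree: every node is filtered into four children (Eq. 6)."""
    nodes = [image]                                   # n_{0,0} = I
    for _ in range(levels):
        children = []
        for node in nodes:
            out = F.conv2d(node, weight, stride=2)    # one stride-two filtering step
            children.extend(out.split(1, dim=1))      # four child nodes per parent
        nodes = children
    return torch.cat(nodes, dim=1)                    # (1, 4**levels, h / 2**levels, w / 2**levels)

x = torch.rand(1, 1, 1024, 1024)
print(wavelet_packets(x, 3).shape)   # torch.Size([1, 64, 128, 128])
```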

Fig. 5
figure 5

Visualization of the 2D wavelet packet analysis and synthesis transform. The analysis filters are written as \(\mathbf {H}_k\), synthesis filters as \(\mathbf {F}_k\). We show all first-level coefficients as well as some second-level coefficients aa, ah, av, ad. The dotted lines indicate the omission of further possible analysis and synthesis steps. The transform is invertible in principle; \(\hat{\mathbf {I}}\) denotes the reconstructed original input

We show a schematic drawing of the process on the left of Fig. 5. The upper half shows the analysis transform, which leads to the coefficients we are after. The lower half shows the synthesis transform, which allows inversion for completeness. Finally, for the correct interpretation of Fig. 1, appendix Fig. 22 lists the exact filter combinations for each packet. The fusion networks in Sect. 5.2 and onward use multiple complete levels at the same time.

5 Classifier design and evaluation

Fig. 6
figure 6

The left plot shows the mean level-3 Haar wavelet packet coefficient values for 63k \(128\times 128\) pixel FFHQ (blue) and StyleGAN (orange) images each. The shaded area indicates a single standard deviation \(\sigma\). We find higher mean coefficient values for the StyleGAN samples across the board; as the frequency increases from left to right, the differences become more pronounced. The plot on the right shows validation and test accuracy for the identification experiment, where linear regression networks were used to separate FFHQ and StyleGAN. The blue line shows the accuracy using raw pixels and the orange line the accuracy using \(\ln\)-scaled absolute wavelet packet coefficients. Shaded areas indicate a single standard deviation over different initializations. We find that working with \(\ln\)-scaled absolute packets allows linear separation of both image sources and significantly improves the convergence speed and the final result. We found a mean test accuracy of \(99.75 \pm 0.07 \%\) for the packet regression and \(83.06 \pm 2.5 \%\) for the pixel regression

In Fig. 1 we saw significantly different mean wavelet coefficients and standard deviations, shown in the rightmost column. The disparity of the absolute mean packet difference widened as the frequency increased along the diagonal. Additionally, background and edge coefficients appeared to diverge. Exclusively for plotting purposes, we now also remove the spatial dimensions by averaging over them. On the left of Fig. 6 we observe increasing mean differences across the board; the differences are especially pronounced at the higher frequencies on the right. Compared to the FFHQ standard deviation, the variance produced by StyleGAN, shown in orange, is smaller for all coefficients. In the following sections, we aim to exploit these differences for the identification of artificial images. We start with an interpretable linear network and then move on to highly performant nonlinear CNN architectures.

5.1 Proof of concept: linearly separating FFHQ and StyleGAN images

Encouraged by the differences we saw in Figs. 1 and 6, we attempt to linearly separate the \(\ln\)-scaled third-level Haar wavelet packets by training an interpretable linear regression model. We aim to obtain a classifier separating FFHQ wavelet packets from StyleGAN packets. We work with 63k images per class for training, 2k for validation, and 5k for testing. All images have a \(128\times 128\) pixel resolution. The spatial dimensions are left intact. Wavelet coefficients and raw pixels are normalized per color channel using the mean \(\mu\) and standard deviation \(\sigma\): we subtract the training-set mean and divide by the standard deviation. On both normalized feature sets, we train identical networks using the Adam optimizer (Kingma & Ba, 2015) with a step size of 0.001 in PyTorch (Paszke et al., 2017).

We plot the mean validation and test accuracy over five runs with the fixed seeds 0, 1, 2, 3, 4 in Fig. 6 on the right. The shaded areas indicate a single \(\sigma\)-deviation. The networks achieve a mean test accuracy of 99.75 ± 0.07%. In the Haar packet coefficient space, we are able to separate the two sources linearly. Working in the Haar wavelet space improved both the final result and the convergence speed.
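A minimal sketch of this proof-of-concept classifier: a single linear layer on flattened, normalized packet features trained with Adam at step size 0.001. The random tensors and the handful of training steps are placeholders for the actual 63k-image packet features and the full training loop.

```python
import torch
from torch import nn

torch.manual_seed(0)   # the experiments fix seeds 0-4 for reproducibility

# Placeholder features standing in for ln-scaled level-3 packets of shape (batch, 64, 16, 16, 3).
features = torch.randn(512, 64, 16, 16, 3)
labels = torch.randint(0, 2, (512,))   # 0 = FFHQ, 1 = StyleGAN

# Normalize with the (training-set) mean and standard deviation per color channel.
mean = features.mean(dim=(0, 1, 2, 3), keepdim=True)
std = features.std(dim=(0, 1, 2, 3), keepdim=True)
features = (features - mean) / std

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 16 * 16 * 3, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):   # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    print(step, loss.item())
```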

Fig. 7
figure 7

Color-encoded matrix visualization of the learned classifier weights for the binary classification problem. We reshape the classifier matrix into the [2, 64, 16, 16, 3] packet form, average over the three color channels, and arrange the 16 by 16 pixel packets in frequency order. Figure 1 suggested that high-frequency packets would be crucial for deepfake identification; this figure confirms our initial observation. Input packet labels are available in Fig. 22

We visualize the weights for both classes of the classifier with seed 0 in Fig. 7. The learned classifier confirms our observation from Fig. 1: to identify deepfakes, it does indeed rely heavily on the high-frequency packets. The two weight images are essentially inverses of each other. High-frequency coefficients contribute positively to the StyleGAN neuron, while their impact on the FFHQ neuron is negative.

5.2 Large-scale detection of fully fabricated images

In the previous experiment, two image sources, authentic FFHQ and StyleGAN images, had to be distinguished. This section studies a standardized problem with more image sources. To allow comparison with previous work, we exactly reproduce the experimental setup of Frank et al. (2020) and Yu et al. (2019). We choose this setup because we want to compare, in particular, to the spatial approach of Yu et al. (2019) and the frequency-only DCT representation of Frank et al. (2020); we cite their results for comparison. Additionally, we benchmark against the photo-response non-uniformity (PRNU) approach as well as eigenfaces (Sirovich & Kirby, 1987). The experiments in this section use four GANs: CramerGAN (Bellemare et al., 2017), MMDGAN (Binkowski et al., 2018), ProGAN (Karras et al., 2018), and SN-DCGAN (Miyato et al., 2018).

150k images were randomly selected from the Large-scale CelebFaces Attributes (CelebA) (Liu et al., 2015) and Large-scale Scene UNderstanding (LSUN) bedroom (Yu et al., 2015) data sets. Our pre-processing is identical for both. The real images are cropped and resized to \(128 \times 128\) pixels each. With each of the four GANs, an additional 150k images are generated at the same resolution. The 750k total images are split into 500k training, 100k validation, and 150k test images. To ensure stratified sampling, we draw the same number of samples from each generator and from the original data set. As a result, the train, validation, and test sets contain equally many images from each source.

We compute three levels of wavelet packets for each image. We explore the use of Haar and Daubechies wavelets as well as symlets (Daubechies, 1992; Strang & Nguyen, 1996; Jensen & la Cour-Harbo, 2001; Mallat, 2009). The Haar wavelet is also the first Daubechies wavelet, and Daubechies wavelets and symlets are identical up to degree 3. Table 2, therefore, only compares the two families for degrees of four and higher.

Both raw pixels and wavelet coefficients are normalized for a fair comparison. Normalization is always the last step before the models start their analysis: we subtract the training-set color-channel mean and divide each color channel by its standard deviation. The ln-scaled coefficients are normalized after the rescaling. Given the original images as well as images generated by the four GANs, our classifiers must either identify an image as authentic or point out the generating GAN architecture. We train identical CNN classifiers on top of the pixel and various wavelet representations. Additionally, we evaluate eigenface and PRNU baselines. The wavelet packet transform, regression, and convolutional models are implemented in PyTorch (Paszke et al., 2019). Adam (Kingma & Ba, 2015) optimizes our convolutional models for 10 epochs. For all experiments, the batch size is set to 512 and the learning rate to 0.001.

Table 1 The CNN architectures we used in our experiments

Table 1 lists the exact network architectures we trained. The Fourier and pixel architectures use the convolutional network described in Frank et al. (2020). Since the Fourier transform does not change the input dimensions, no architectural changes are required; consequently, our Fourier and pixel input experiments share the same architecture. The wavelet packet transform, as described in Sect. 4.4, employs sub-sampling operations and multiple filters. The filtered features stack up along the channel dimension; hence, the channel number increases with every level. At the same time, analyzing an additional scale cuts the spatial dimensions in half each time. The network in the leftmost column of Table 1 accommodates these changes in dimensionality: its first layer has enough input filters to process all incoming packets. It does not, however, exploit the fundamental similarities connecting wavelet packets and convolutional neural networks. The fusion architecture on the very right of Table 1 does. Its design allows it to process multiple packet representations alongside its own internal CNN features. An average pooling operation per packet level produces CNN features and wavelet packet coefficients with the same spatial dimensions; both are concatenated along the channel dimension and processed further in the next layer. The concatenation of the wavelet packets requires additional input filters in each convolution layer: we use the number of output filters from the previous layer plus the number of packet channels.
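The fusion idea can be sketched as a small module: packet coefficients are average-pooled to the spatial size of the current CNN feature map and concatenated along the channel dimension before the next convolution. This is a simplified stand-in for the fusion column of Table 1; the layer widths and the pooling choice here are made up for illustration.

```python
import torch
from torch import nn
import torch.nn.functional as F

class FusionBlock(nn.Module):
    """Concatenate pooled wavelet packets with CNN features, then convolve."""

    def __init__(self, in_channels: int, packet_channels: int, out_channels: int):
        super().__init__()
        # The extra input filters account for the concatenated packet channels.
        self.conv = nn.Conv2d(in_channels + packet_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor, packets: torch.Tensor) -> torch.Tensor:
        # Pool the packet coefficients to the spatial size of the CNN feature map.
        pooled = F.adaptive_avg_pool2d(packets, features.shape[-2:])
        return F.relu(self.conv(torch.cat([features, pooled], dim=1)))

# Toy example: 32 CNN feature channels at 32x32 fused with 12 packet channels
# (4 first-level packets times 3 color channels) computed at 64x64.
block = FusionBlock(in_channels=32, packet_channels=12, out_channels=64)
out = block(torch.rand(1, 32, 32, 32), torch.rand(1, 12, 64, 64))
print(out.shape)   # torch.Size([1, 64, 32, 32])
```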

Table 2 CNN source identification results on the CelebA and LSUN bedroom data sets

Table 2 lists the classification accuracies of the previously described networks and benchmark methods with various input features. We first study the effect of third-level wavelet packets in comparison to the pixel representation. On CelebA, the eigenface approach in particular classified 24.11% more images correctly when run on ln-scaled Haar wavelet packets instead of pixels. For the more complex CNN, Haar wavelets are not enough; more complex wavelets, however, significantly improve the accuracy. We find accuracy maxima superior to the DCT approach from Frank et al. (2020) for five wavelets, using a comparable CNN. The db3 and db4 packets are better on average, demonstrating the robustness of the wavelet approach. We choose the db4 wavelet representation for the feature fusion experiments. Fusing wavelet packets into a pixel-CNN improves the result while reducing the total number of trainable parameters. On both CelebA and LSUN, the Fourier representation emerged as a strong contender. However, we argue that Fourier and wavelet features are compatible and can be used jointly: on both CelebA and LSUN, the best performing networks fused Fourier, pixel, and the first three wavelet packet scales.

Table 3 Confusion matrix for our CNN using db4-wavelet-packets on CelebA

We show the confusion matrix for our best-performing network on CelebA in Table 3. Among the GANs, consider the ProGAN and SN-DCGAN architectures: images drawn from both were almost exclusively misclassified as original data and rarely attributed to another GAN. The CramerGAN- and MMDGAN-generated images were often misattributed to each other, but rarely confused with the original data set. Of all misclassified real CelebA images, most were confused with ProGAN images, making ProGAN the most advanced network of the four. Overall, we find that our classifier, lightweight in comparison to a GAN with only 109k parameters, accurately spots 99.45% of all fakes in this case. Note that our convolutional networks have only 109k or 127k parameters, respectively, while Yu et al. (2019) used roughly 9 million parameters and Frank et al. (2020) utilized around 170k parameters. We also observed that using our packet representation improved convergence speed.

Fig. 8
figure 8

The left plot depicts the mean ln-db4 wavelet packet curves for CelebA, CramerGAN, MMDGAN, ProGAN and SN-DCGAN, where the CelebA curve is mostly hidden behind those for ProGAN and SN-DCGAN. CramerGAN and MMDGAN show similar means, which relates to their frequent pairwise misattribution. The right plot presents the mean saliency map over all labels for the CNN-\(\ln\)-db4 trained with seed 0

In Fig. 8 (left) we show the mean ln-db4 wavelet packet plots, where two GANs behave similarly to CelebA while two differ. For interpretation, we use saliency maps (Simonyan et al., 2014) as an exemplary approach and find that the classifier relies on the edges of the spectrum, see Fig. 8 (right).
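Saliency maps in the sense of Simonyan et al. (2014) are the absolute gradients of the predicted class score with respect to the input. A minimal sketch, with a small random model standing in for the packet-CNNs of Table 1:

```python
import torch
from torch import nn

# Stand-in classifier; the experiments use the packet-CNN architectures from Table 1.
model = nn.Sequential(
    nn.Conv2d(64, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 16 * 16, 5),
)
model.eval()

packets = torch.rand(1, 64, 16, 16, requires_grad=True)   # ln-scaled packet input
scores = model(packets)[0]
scores[scores.argmax()].backward()                         # gradient of the winning class score

saliency = packets.grad.abs().squeeze(0)                   # one saliency value per coefficient
print(saliency.shape)                                      # torch.Size([64, 16, 16])
```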

5.2.1 Training with a fifth of the original data

We study the effect of a drastic reduction of the training data by shrinking the training set by 80%.

Table 4 CNN source identification results on the CelebA data set with only 20 % of the original data

Table 4 reports the test accuracies for this setting. The classifiers are retrained as described in Sect. 5.2. We find that our Daubechies-4 packet-CNN classifiers are robust to training data reductions, in particular in comparison to the pixel representations. Wavelet representations share this property with the Fourier inputs proposed by Frank et al. (2020).

Fig. 9
figure 9

Mean validation and test set accuracy of 5 runs of source identification on CelebA for a CNN trained on the raw images and \(\ln\)-db4 wavelet packets, each using only 20 % of the original training data. The shaded areas indicate a single standard deviation \(\sigma\)

Figure 9 plots the validation accuracy during training as well as the test accuracy at the end. The wavelet packet representation produces a significant initial boost, which persists throughout the training process. This observation is in line with the training behaviour of the linear regressor shown in Fig. 6 on the right. In addition to reducing the training set size, we discuss removing a GAN entirely in the appendix section “Detection of images from an unknown generator”.

5.3 Detecting additional GANs on FFHQ

In this section, we consider a more complex setup on FFHQ, taking into account what we learned in the previous section. We extend the setup studied in Sect. 5.1 by adding StyleGAN2 (Karras et al., 2020) and StyleGAN3 (Karras et al., 2021) generated images. Unlike the setup of Sect. 5.2, this one has not been explored in previous work; we consider it here to work with some of the most recent architectures. FFHQ-pre-trained networks are available for all three GANs. We re-evaluate the most important models from Sect. 5.2 in this FFHQ setting.

Table 5 Classification accuracy for the FFHQ, StyleGAN (Karras et al., 2018), StyleGAN2 (Karras et al., 2020) and StyleGAN3 (Karras et al., 2021) separation problem

We re-use the training hyperparameters described in Sect. 5.2. Results appear in Table 5. In comparison to Table 2, we observe a surprising performance drop for the Fourier features. For the Daubechies-four wavelet packets, we observe consistent performance gains, both in the regression and in the convolutional setting. The network fusing pixels and three wavelet packet levels performed best; fusing Fourier as well as wavelet packet features does not help in this case.

Fig. 10
figure 10

The left figure shows mean db4 wavelet packets for FFHQ, StyleGAN (Karras et al., 2018), StyleGAN2 (Karras et al., 2020) and StyleGAN3 (Karras et al., 2021) images. Standard deviations are included in appendix plot 19. The right side shows a saliency map (Simonyan et al., 2014) for the ln-db4-CNN classifier from Table 5. All wavelet filter combinations are labeled in appendix Fig. 22

As before, we investigate mean ln-db4 wavelet packet plots and saliency maps. The mean wavelet coefficients on the left of Fig. 10 reveal an improved ability of the StyleGAN2 and StyleGAN3 architectures to model the spectrum accurately, yet differences to FFHQ remain. Appendix Table 9 confirms this observation: the two newer GANs are misclassified more often. The per-packet mean and standard deviation for StyleGAN2 and StyleGAN3 are almost identical. In contrast to Fig. 8, the saliency score indicates a more widespread importance of packets: we see a peak at the very high-frequency packets and generally higher saliency at higher frequencies.

5.4 Detecting partial manipulations in Face-Forensics++

In all previously studied images, all pixels were either original or fake. This section studies the identification of partial manipulations. To this end, we consider a subset of the ff++ video deepfake detection benchmark (Rossler et al., 2019). The full data set contains original as well as manipulated videos. Altered clips are deepfaked (Deepfake-Faceswap-Contributors, 2018), include neural textures (Thies et al., 2019), or have been modified using the more traditional face-swapping (Kowalski, 2016) or face-to-face (Thies et al., 2016) translation methods. The first two methods are deep learning-based.

Data preparation and frame extraction follow the standardized setup from Rossler et al. (2019). Seven hundred and twenty videos are used for training and 140 each for validation and testing. We sample 260 images from each training video and 100 from each test and validation clip. A fake or real binary label is assigned to each frame. After re-scaling each frame to [0, 1], data preprocessing and network training follow the description in Sect. 5.2. We trained all networks for 20 epochs.

Table 6 Detection results on the neural subset of face-forensics++ (Rossler et al., 2019)

We first consider the neural subset, consisting of pristine images, deepfakes, and images containing neural textures. Results are tabulated in Table 6. We find that our classifiers outperform the pixel-CNN baseline for both db4 and sym4 wavelets. Note that the ff++ data set only modifies the facial region of each image; a possible explanation could be that the first and second scales produce coarser frequency resolutions that do not help in this specific case.

According to appendix Figs. 20 and 21, the deep-learning-based methods create frequency-domain artifacts similar to those we saw in Figs. 6 and 7. Therefore, we again see improved performance for the packet-based classifiers in Table 6.

Appendix Table 10 lists results on the complete data set for three compression levels. The wavelet-based networks are competitive compared to approaches without ImageNet pretraining; we highlight the high-quality setting C23 in particular. However, the full data set also includes classic computer-vision methods that do not rely on deep learning. These methods produce fewer divergences in the higher wavelet packet coefficients, making them harder to detect. The appendix section “Results on the full ff++ with and without image corruption” discusses Table 10.

5.5 Guided diffusion on LSUN

Fig. 11
figure 11

A logarithmically scaled Haar wavelet packet representation is shown on the left. The mean packets of 40k LSUN-bedroom images are shown in blue; the orange plot shows the mean packets of 40k images generated by guided diffusion. In both cases, the image resolution was 256 by 256 pixels. We see that guided diffusion approximates the frequency spectrum well; however, a small systematic increase is visible at higher frequencies. On the right, we show the saliency map for the CNN classifier trained on top of the high-resolution packet representation. Similarly to the GAN classifiers, it appears to focus on the highest-frequency packet

To evaluate our detection method on non-GAN-based deep learning generators, we consider the recent guided diffusion approach. We study images generated with the supplementary code of Dhariwal and Nichol (2021) on the LSUN-bedrooms data set.

Table 7 Classification accuracy using various feature representations on the LSUN and guided-diffusion separation problem

We start by considering 40k images per class at a \(128 \times 128\) pixel resolution. We use 38k images per class for training and set 2k aside, which we split equally into 1k images for validation and 1k for testing. For our analysis, we work with the wavelet packet representation as described in Sect. 5.1; additionally, we test pixel and Fourier representations. Table 7 shows that the wavelet packet approach works comparatively well in this setting.

We investigate further and study an additional 40k images per class at a higher \(256 \times 256\) pixel resolution. We visualize the third-level Haar wavelet packets in Fig. 11 on the left. The coefficients of the images generated by guided diffusion exhibit a slightly elevated mean and a reduced standard deviation. Using stacked wavelet packets with the spatial dimensions intact, we train a Haar-CNN as described in Sect. 5.2. To work at 256 by 256 pixels, we set the size of the final fully connected layer of the wavelet-packet architecture from Table 1 to \(24 \cdot 17 \cdot 17, 2\). Over five experiments, we observe a test accuracy of \(99.04 \pm 0.6\%\). We conclude that the wavelet packet approach reliably identifies images generated by guided diffusion. We did not find higher-order wavelets beneficial for the guided-diffusion data and leave the investigation of possible causes to future work.

6 Conclusion and outlook

We introduced a wavelet packet-based approach for deepfake analysis and detection, which is based on a multi-scale image representation in space and frequency. We saw that wavelet-packets allow the visualization of frequency domain differences while preserving some spatial information. We found diverging mean values for packets at high frequencies. At the same time, the bulk of the standard deviation differences were at the edges and within the background portions of the images. This observation suggests contemporary GAN architectures still fail to capture the backgrounds and high-frequency information in full detail.

We found that coupling higher-order wavelets with a CNN led to improved or competitive performance in comparison to a DCT approach or working directly on the raw images, with fused architectures showing the best performance. Furthermore, the employed lean neural network architectures allow efficient training and achieve similar accuracy with only 20% of the training data. Overall, our classifiers deliver state-of-the-art performance, require few learnable parameters, and converge quickly.

Even though releasing our detection code will allow potential bad actors to test against it, we hope our approach will complement the publicly available deepfake identification toolkits. We propose to further investigate the resilience of multi-classifier ensembles in future work and envision a framework of multiple forensic classifiers, which together give strong and robust results for artificial image identification. Future work could also explore the use of curvelets and shearlets for deepfake detection. Integrating image data from diffusion models and GANs into a single standardized data set will also be an important task in future work.