Since Joseph Fourier (1768–1830; see Bracewell, 2000) first studied heat flow in the Earth using his then-contentious idea that an arbitrary function could be represented by a single analytic expression, a trigonometric series, harmonic analysis has become an established approach to analysing a time series.
The representation of a time series as a sequence of numbers can be analysed to identify dominant harmonics or periodicities and trends, and even used to predict the future from the history; aims which have had a long development. A Fourier series uses summed sinusoids to create a representation equivalent to the time sequence of numbers but in a second domain (frequency as opposed to time); here the time sequence becomes a spectrum of harmonic components, weighted by the moduli of the sinusoidal coefficients, with phase differences between the harmonics. Hence we have the Fourier series and transform. We thus also arrive at Fourier transform pairs, in which a function that can be represented analytically in time has a Fourier transform that is an analytic function of frequency. Well known examples are: a Gaussian function transforming to another Gaussian, the sinc function transforming to the unit rectangle function, and the sinc² function transforming to the triangle function; all are useful when visualizing deconvolution and filter operations, and Bracewell (1965) provided a pictorial dictionary of such pairs “for inspiration”. An analytic function or time sequence is required to extend from −∞ to +∞, which in practice, for physical time sequences, leads to discretised sequences, sampling intervals and bandwidth considerations.
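As an illustrative aside (a sketch not from the original text), the first of these pairs can be verified numerically; the grid, sampling interval and tolerance below are arbitrary choices, and the form exp(−πt²) is used because it is its own Fourier transform under the ordinary-frequency convention.

```python
import numpy as np

# Sample exp(-pi t^2) on a symmetric grid wide enough that truncation
# error is negligible; the grid approximates the interval (-inf, +inf).
dt = 0.01
t = np.arange(-50, 50, dt)
g = np.exp(-np.pi * t**2)

# Approximate the continuous Fourier transform: shift so t = 0 sits at
# index 0, take the FFT, scale by dt, and reorder alongside fftfreq.
G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))) * dt
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))

# The transform is again a Gaussian, exp(-pi f^2), and is purely real.
assert np.allclose(G.real, np.exp(-np.pi * f**2), atol=1e-6)
assert np.allclose(G.imag, 0.0, atol=1e-6)
```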
Achieving a Fourier transform or Fourier series representation of a time sequence of numbers was arithmetically tedious; see Schuster’s (1897) analysis of earthquake occurrences and possible Earth–Moon–Sun interactions to appreciate such tedium and arithmetic labour, until the Cooley–Tukey (1965) algorithm facilitated rapid machine calculation (an algorithm born of the need to analyse the glut of seismograms required for nuclear explosion test monitoring). The advent of the Cooley–Tukey algorithm allowed the rapid computation, and transition between the time and frequency domains, needed to obtain discretised Fourier pairs and coefficients. The knowledge that an entirely real physical time signal could be represented as a complex variable ushered in the ability to obtain the spectral content of the “wiggles”, as later referred to by Grossmann and Morlet (1984), to carry out deconvolution, and to measure physical properties of materials in the Earth. As one example among many, with these tools a seismogram could be analysed with a suitable set of band-pass filters in the frequency domain, followed by inverse Fourier transformation back to time for each filter in the set, to identify and localise the arrival time of energy at each filter’s central frequency, pinpointed as the maximum of the time-domain analytic signal, hence obtaining group velocities for a wave propagating in space (e.g. Burton & Blamey, 1972, is one of several). These methods, alongside the ability to compute spectral amplitudes, were next used to measure the anelastic attenuation factor, or quality factor Q, within the Earth (Burton, 1974). There are many similar examples which embrace the ability to analyse waves and wave fronts advancing through the Earth from a source, and the physical properties encountered en route. However, the earthquake sources themselves, and the time sequence constructed from a regional history of earthquakes, describe a point process (of earthquake occurrence dates in time) rather than the passage in time of a wave front at a point in space. The question arises: does the time sequence of an earthquake history contain harmonics, or periodicities, dictated by some underlying process, which might be known or unknown? Schuster, long ago in 1897, addressed this issue by laborious arithmetical calculation of the sinusoidal coefficients of Fourier series representations of regional earthquake histories, inspecting for periodicities which might, or might not, be linked to lunar influence; vitally, he introduced a probability calculation to test whether the amplitude of any periodicity was significantly different to that expected from a purely random process of earthquake occurrence in time. This approach has been exploited for over a century. A modified Schuster approach has recently been applied by Ader and Avouac (2013) to Nepalese seismicity, which spans a significant extent of the Himalayas that is of interest to us. They discern a 40% increase of seismicity (intermediate magnitudes) in winter but no periodicities that might be linked to tidal variation. However, strategies have existed for some time to move away from such approaches, and away from pure sinusoidal Fourier series, to those advocated by Grossmann and Morlet (1984).
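The probability calculation Schuster introduced survives in the modern Schuster test; the sketch below states its standard form (the function name and the synthetic catalogue are ours, for illustration only): each event time is reduced to a phase within the candidate period, and the probability that the resultant of the N unit phasors would be matched by purely random occurrence is exp(−R²/N).

```python
import numpy as np

def schuster_p_value(event_times, period):
    """Schuster test: probability that the observed clustering of event
    phases within `period` would arise from purely random occurrence.
    A small p-value suggests a genuine periodicity at that period."""
    phases = 2.0 * np.pi * (np.asarray(event_times) % period) / period
    # Resultant length R of N unit phasors; for random phases R^2/N is
    # approximately exponentially distributed with unit mean.
    r2 = np.sum(np.cos(phases))**2 + np.sum(np.sin(phases))**2
    return np.exp(-r2 / len(event_times))

# Synthetic example: test ~100 years of random occurrence days for an
# annual (365.25-day) periodicity; p is typically far above 0.05.
rng = np.random.default_rng(0)
random_days = rng.uniform(0, 36525, size=500)
print(schuster_p_value(random_days, 365.25))
```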
Whereas summed sinusoids are intuitive as a representation of harmonics in a propagating wave (and the frequency or harmonic content can be manipulated and inspected in the time domain), they are not intuitive as a representation of periodicity in a time-sequenced earthquake history. Alternative windowing techniques can simultaneously evaluate the periodic spectral content of a point process in time and its variability through time; the methods of Grossmann and Morlet (1984) have the advantage of achieving this two-pronged target. What follows is guided by the practical advice for implementation given by Torrence and Compo (1998).
The Morlet wavelet contains a complex exponential carrier multiplied by a Gaussian window. It was suggested by Jean Morlet for seismological application; Morlet collaborated with Grossmann to provide a systematic basis for this wavelet transform (Grossmann & Morlet, 1984). The Morlet wavelet function is defined as
$$ \psi (t) = \pi^{-1/4} e^{i\omega_{0} t} e^{-t^{2}/2} $$
(3)
where ω0 is the nondimensional frequency and t is time (Torrence and Compo, 1998). A wavelet transform could be built using alternative windows; the one in (3), built using a Gaussian, is the Morlet wavelet. There is a condition for a function to be a wavelet: the mean of the windowed function must be zero. The Morlet wavelet meets this admissibility condition, to good approximation, when ω0 = 6, and it is then localised in both the time and frequency domains (Farge, 1992). The Morlet wavelet can therefore be used to examine nonstationary power over a range of frequencies, with the Gaussian assisting localisation in both domains. (The Fourier pair of a Gaussian function is another Gaussian function, as illustrated graphically by Bracewell, 1965, 2000.) A regional earthquake history is a discrete sequence in time. The discrete wavelet transform formula is
$$ W_{f} (a,b) = |a|^{-\tfrac{1}{2}} \sum\limits_{i = 1}^{N} f(i\delta t)\, \psi^{*}\!\left( \frac{i\delta t - b}{a} \right) $$
(4)
where ψ* indicates the complex conjugate, a is the scale factor, related to period T and frequency ω, b is the translation, related to location in time, i is the data sequence time position label, f(iδt) is the digitised equivalent of the continuous time series f(t), δt is the sequence time interval, and Wf(a, b) is a wavelet coefficient. For many wavelets the scale factor a is dissimilar to the values of period T and frequency ω obtained using the Fourier transform. For the Morlet wavelet with ω0 = 6, the relation T = 4πa/[ω0 + (2 + ω0²)^0.5] gives T = 1.033a; that is, for the Morlet wavelet the scale is almost equal to the Fourier period, differing by circa 3% (Torrence and Compo, 1998).
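As a minimal sketch under the definitions above (an illustration, not the authors’ implementation; the monthly synthetic sequence and its 12-month cycle are assumptions for demonstration only), Eqs. (3) and (4) can be evaluated directly:

```python
import numpy as np

def morlet(t, omega0=6.0):
    """Morlet wavelet of Eq. (3): a complex exponential carrier in a
    Gaussian window; with omega0 = 6 its mean is effectively zero."""
    return np.pi**-0.25 * np.exp(1j * omega0 * t) * np.exp(-t**2 / 2.0)

def wavelet_coefficients(f, dt, scale, omega0=6.0):
    """W_f(a, b) of Eq. (4) for a single scale a, with a translation b
    placed at every sample time i*dt of the digitised sequence f."""
    t = np.arange(1, f.size + 1) * dt
    W = np.empty(f.size, dtype=complex)
    for j, b in enumerate(t):
        psi = morlet((t - b) / scale, omega0)   # psi* taken via conj below
        W[j] = np.abs(scale)**-0.5 * np.sum(f * np.conj(psi))
    return W

# Synthetic example: 240 monthly event counts carrying a 12-month cycle.
dt = 1.0                                  # one sample per month
months = np.arange(240)
counts = 5 + 2 * np.cos(2 * np.pi * months / 12)
scale = 12 / 1.033                        # scale whose Fourier period is ~12 months
W = wavelet_coefficients(counts, dt, scale)
```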
The wavelet power spectrum is
$$ E_{a,b} = |W_{f} (a,b)|^{2} $$
(5)
and the overall wavelet power spectrum that characterizes the corresponding energy density at different scales is given by
$$ E_{a} = \frac{1}{N}\sum\limits_{b = 1}^{N} |W_{f} (a,b)|^{2} $$
(6)
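Continuing the sketch above (the names W, counts and scale carry over from that hypothetical example), Eqs. (5) and (6) follow directly from the coefficients:

```python
import numpy as np

E_ab = np.abs(W)**2    # Eq. (5): wavelet power at scale a, translation b
E_a = E_ab.mean()      # Eq. (6): overall (time-averaged) power at scale a
```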
The search for significant periodicities in earthquake histories by inspecting Fourier sinusoidal coefficients becomes rigorous when specific coefficients are demonstrated to be significantly different to those arising by chance in a random process (Ader and Avouac, 2013; CalTech, 2012; Schuster, 1897). A statistical test for the Morlet spectrum is similarly important: the Morlet power spectrum can be compared with a reference noise spectrum. Red and white noise are the usual standards for such tests; typically red noise is used to determine whether a Morlet wavelet power spectrum contains harmonics significantly different to those expected from noise alone (Torrence and Compo, 1998; Yin et al., 2012). If \(p_{a}\) is the red-noise spectrum at scale a, then it is given by
$$ p_{a} = \frac{1 - \alpha^{2}}{1 + \alpha^{2} - 2\alpha \cos \left( \frac{2\pi \delta t}{1.033\,a} \right)} $$
(7)
where α is the assumed lag-1 autocorrelation, δt is the data sequence time interval, and 1.033a converts each wavelet scale a to its equivalent Fourier period. The significance threshold derived from this theoretical red-noise spectrum is given by
$$p=\frac{{\sigma }^{2}{p}_{a}{\chi }_{\nu }^{2}}{\nu }$$
(8)
where \(\chi_{\nu}^{2}\) is the value of the \(\chi^{2}\) distribution with ν degrees of freedom at the 0.05 significance level and σ² is the variance of the original data sequence. Any periodicity in the overall wavelet power spectrum is significant when \(E_{a} > p\). The significance testing associated with the Morlet spectra that follow uses graphs to display both \(E_{a}\) and p for all periodicities inspected.
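A hedged sketch of the test in Eqs. (7) and (8) follows (the scale grid and the ν = 2 default are illustrative assumptions; Torrence and Compo, 1998, discuss the appropriate degrees of freedom for time-averaged spectra):

```python
import numpy as np
from scipy.stats import chi2

def red_noise_threshold(scales, dt, alpha, variance, dof=2):
    """Eq. (7): red-noise spectrum at the Fourier period 1.033*a of each
    Morlet scale, scaled via Eq. (8) into a 95% significance threshold."""
    p_a = (1 - alpha**2) / (
        1 + alpha**2 - 2 * alpha * np.cos(2 * np.pi * dt / (1.033 * scales)))
    return variance * p_a * chi2.ppf(0.95, dof) / dof

# Example using the synthetic sequence from the earlier sketch.
x = counts - counts.mean()
alpha = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation estimate
scales = np.arange(2.0, 64.0)
p = red_noise_threshold(scales, dt=1.0, alpha=alpha, variance=x.var())
# A periodicity at scale a is significant where E_a exceeds p at that scale.
```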