Abstract
Waves and oscillations have been observed in the Sun’s atmosphere for over half a century. While such phenomena have readily been observed across the entire electromagnetic spectrum, spanning radio to gamma-ray sources, the underlying role of waves in the supply of energy to the outermost extremities of the Sun’s corona has yet to be uncovered. Of particular interest is the lower solar atmosphere, including the photosphere and chromosphere, since these regions harbor the footpoints of powerful magnetic flux bundles that are able to guide oscillatory motion upwards from the solar surface. As a result, many of the current- and next-generation ground-based and space-borne observing facilities are focusing their attention on these tenuous layers of the lower solar atmosphere in an attempt to study, at the highest spatial and temporal scales possible, the mechanisms responsible for the generation, propagation, and ultimate dissipation of energetic wave phenomena. Here, we present a two-fold review that is designed to both overview the wave analysis techniques the solar physics community currently has at its disposal, and highlight scientific advancements made over the last decade. Importantly, while many ground-breaking studies will address and answer key problems in solar physics, the cutting-edge nature of their investigations will naturally pose yet more outstanding observational and/or theoretical questions that require subsequent follow-up work. This is not only to be expected, but should be embraced as a reminder of the era of rapid discovery we currently find ourselves in. We will highlight these open questions and suggest ways in which the solar physics community can address them in the years and decades to come.
1 Introduction
Understanding the energy flow through the Sun’s dynamic and tenuous atmosphere has long been a scientific interest for the global astrophysical community. The challenge of identifying the source(s) responsible for the elevated multi-million Kelvin temperatures in the solar corona has produced two main theoretical mechanisms. The first is via magnetic reconnection—the so-called ‘DC’ heating mechanism. Here, the continual re-configuration of the omnipresent magnetic fields that populate the Sun’s atmosphere allows the production of intense thermal heating as the magnetic energy is converted through the process of reconnection, producing dramatic flares that often release energies in excess of \(10^{31}\) ergs during a single event (Priest 1986; Priest and Schrijver 1999; Shibata and Magara 2011; Benz 2017). However, such large-scale solar flares are relatively rare, and hence cannot supply the global background heating required to continuously maintain the corona’s elevated temperatures. Instead, there is evidence to suggest that the frequency of flaring events, as a function of their energy, is governed by a power-law relationship (Shimizu and Tsuneta 1997; Krucker and Benz 1998; Aschwanden et al. 2000; Parnell and Jupp 2000), whereby smaller-scale micro- and nano-flares (with energies \(\sim 10^{27}\) ergs and \(\sim 10^{24}\) ergs, respectively) may occur with such regularity that they can sustain the thermal inputs required to maintain the hot corona. Many modern numerical and observational studies have been undertaken to try to quantify the ubiquity of these faint reconnection events, which often lie at (or below) the noise level of current-generation facilities (Terzo et al. 2011). Due to the difficulties surrounding the extraction of nanoflare characteristics embedded within the noise limitations of the data, only tentative evidence exists to support their ability to heat the outer solar atmosphere on a global scale (Viall and Klimchuk 2013, 2015, 2016, 2017; Jess et al. 2014, 2019; Bradshaw and Klimchuk 2015; Tajfirouze et al. 2016a, b; Ishikawa et al. 2017, to name but a few recent examples).
The second energy-supplying mechanism for the Sun’s outer atmosphere involves the creation, propagation, and ultimate dissipation of wave-related phenomena—often referred to as the ‘AC’ heating mechanism (Schwarzschild 1948). The specific oscillatory processes responsible for supplying non-thermal energy to the solar atmosphere have come under scrutiny since wave motions were first discovered more than 60 years ago (Leighton 1960; Leighton et al. 1962; Noyes and Leighton 1963a). Of course, such early observations were without the modern technological improvements that enhance image quality, such as adaptive optics (AO; Rimmele and Marino 2011) and image reconstruction techniques, including speckle (Wöger et al. 2008) and multi-object multi-frame blind deconvolution (MOMFBD; van Noort et al. 2005). As a result, many pioneering papers documenting the characteristics of wave phenomena in the lower solar atmosphere relied upon the study of large-scale features that would be less affected by seeing-induced fluctuations, including sunspots and super-granular cells, captured using premier telescope facilities of the time such as the McMath-Pierce Solar Telescope (Pierce 1964) at the Kitt Peak Solar Observatory, USA, and the National Science Foundation’s Dunn Solar Telescope (DST; Dunn 1969), situated in the Sacramento Peak mountains of New Mexico, USA (see Fig. 1).
Images depicting the construction of National Science Foundation facilities some 50 years apart. Panels b, d, e display construction stages of the Dunn Solar Telescope, which was first commissioned in 1969 in the Sacramento Peak mountains of New Mexico, USA. Panels a, c, f depict similar stages of construction for the Daniel K. Inouye Solar Telescope, which acquired first-light observations in 2019 at the Haleakal\(\bar{\text{a}}\) Observatory on the Hawaiian island of Maui, USA. Images courtesy of Doug Gilliam (NSO) and Brett Simison (NSO)
Even at large spatial scales, Doppler velocity and intensity time series from optical spectral lines, including Fe i (Deubner 1967), H\(\alpha \) (Deubner 1969), Ca ii (Musman and Rust 1970), C i (Deubner 1971), and Na i (Slaughter and Wilson 1972) demonstrated the ubiquitous nature of oscillations throughout the photosphere and chromosphere. Through segregation of slowly-varying flows and periodic velocity fluctuations, Sheeley and Bhatnagar (1971) were able to map the spatial structuring of wave power in the vicinity of a sunspot (see Fig. 2), and found clear evidence for ubiquitous photospheric oscillatory motion with periods \(\sim 300\) s and velocity amplitudes \(\sim 0.6\) km s\(^{-1}\). Such periodicities and amplitudes were deemed observational manifestations of the pressure-modulated global p-mode spectrum of the Sun (Ulrich 1970; Leibacher and Stein 1971; Deubner 1975; Rhodes et al. 1977), where internal acoustic waves are allowed to leak upwards from the solar surface, hence producing the intensity and velocity oscillations synonymous with the compressions and rarefactions of acoustic waves.

Image reproduced with permission from Sheeley and Bhatnagar (1971), copyright by Springer
Observations of the photospheric Fe i absorption line, showing the sum of blue- and red-wing intensities (displayed in a negative color scale; top), the total measured Doppler velocities across the field-of-view (middle-top), the slowly varying component of the plasma flows (middle-bottom), and the Doppler velocity map arising purely from oscillatory motion (bottom). The region of interest includes a large sunspot structure (left-hand side), and shows ubiquitous oscillatory signatures with periods \(\sim 300\) s and velocity amplitudes \(\sim 0.6\) km s\(^{-1}\).
Difficulties arose in subsequent work when the measured phase velocities of the waves between two atmospheric heights were too large to remain consistent with a purely acoustic wave interpretation (Osterbrock 1961; Mein and Mein 1976). It was not yet realized that the 5-min oscillations are not propagating acoustic waves, but instead are evanescent in character, since their frequency is lower than the associated acoustic cut-off value (see Sect. 3.1 for further details). Researchers further hypothesized that the magnetic fields, which were often synonymous with the observed oscillations, needed to be considered in order to accurately understand and model the wave dynamics (Michalitsanos 1973; Nakagawa 1973; Nakagawa et al. 1973; Stein and Leibacher 1974; Mein 1977, 1978, to name but a few examples). The field of magnetohydrodynamics (MHD) was introduced to effectively link the observed wave signatures to the underlying magnetic configurations, where the strong field strengths experienced in certain locations (e.g., field strengths that can approach approximately 6000 G in sunspot umbrae; Livingston et al. 2006; Okamoto and Sakurai 2018) produce wave modes that are highly modified from their purely acoustic counterparts.
The importance of the magnetic field in the studies of wave phenomena cannot be overestimated, since both the alignment of the embedded magnetic field, \(B_0\), with the wavevector, k, and the ratio of the kinetic pressure, \(p_0\), to the magnetic pressure, \(B_{0}^{2}/2\mu _{0}\), play influential roles in the characteristics of any waves present (see the reviews by, e.g., Stein and Leibacher 1974; Bogdan 2000; Mathioudakis et al. 2013; Jess et al. 2015; Jess and Verth 2016). Commonly, the ratio of kinetic to magnetic pressures is referred to as the plasma-\(\beta \), defined as,
\[ \beta = \frac{p_{0}}{B_{0}^{2}/2\mu _{0}} \ , \qquad \mathrm{(1)} \]
where \(\mu _{0}\) is the magnetic permeability of free space (Wentzel 1979; Edwin and Roberts 1983; Spruit and Roberts 1983). Crucially, by introducing the local hydrogen number density, \(n_{\text{H}}\), the plasma-\(\beta \) can be rewritten (in cgs units) in terms of the Boltzmann constant, \(k_{B}\), and the temperature of the plasma, T, giving the relation,
\[ \beta = \frac{n_{\text{H}}k_{B}T}{B_{0}^{2}/8\pi } \approx 3.47\times 10^{-15}\,\frac{n_{\text{H}}T}{B_{0}^{2}} \ . \qquad \mathrm{(2)} \]
In the lower regions of the solar atmosphere, including the photosphere and chromosphere, temperatures are relatively low (\(T \lesssim 15,000\,\text{K}\)) when compared to the corona. These low temperatures, combined with the strong magnetic field concentrations (\(B_{0} \gtrsim 1000\,\text{G}\)) found in structures synonymous with the solar surface, including sunspots, pores, and magnetic bright points (MBPs; Berger et al. 1995; Sánchez Almeida et al. 2004; Ishikawa et al. 2007; Utz et al. 2009, 2010, 2013a, b; Keys et al. 2011, 2013, 2014), present wave conduits that are inherently ‘low-\(\beta \)’ (i.e., dominated by magnetic pressure; \(\beta \ll 1\)). Gary (2001) has indicated how such structures (particularly highly-magnetic sunspots) can maintain their low-\(\beta \) status throughout the entire solar atmosphere, even as the magnetic fields begin to expand into the more volume-filling chromosphere (Gudiksen 2006; Beck et al. 2013b). Using non-linear force-free field (NLFFF; Wiegelmann 2008; Aschwanden 2016; Wiegelmann and Sakurai 2021) extrapolations, Aschwanden et al. (2016) and Grant et al. (2018) provided further evidence that sunspots can be best categorized as low-\(\beta \) wave guides, spanning from the photosphere through to the outermost extremities of the corona. As can be seen from Eq. (2), the hydrogen number density (\(n_{\text{H}}\)) also plays a pivotal role in the precise local value of the plasma-\(\beta \). As one moves higher in the solar atmosphere, a significant drop in the hydrogen number density is experienced (see, e.g., the sunspot model proposed by Avrett 1981), often with an associated scale-height on the order of 150–200 km (Alissandrakis 2020). As a result, the interplay between the number density and the expanding magnetic fields plays an important role in whether the environment is dominated by magnetic or plasma pressures.
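To make the dependencies in Eq. (2) concrete, the short Python sketch below evaluates the plasma-\(\beta \) for illustrative umbral values at photospheric heights; the quoted density, temperature, and field strength are order-of-magnitude assumptions chosen purely for demonstration.

```python
import numpy as np

k_B = 1.380649e-16  # Boltzmann constant in cgs units [erg/K]

def plasma_beta(n_H, T, B):
    """Plasma-beta in cgs units, following Eq. (2): the gas pressure,
    n_H * k_B * T, divided by the magnetic pressure, B^2 / (8 * pi)."""
    return 8.0 * np.pi * n_H * k_B * T / B**2

# Assumed (illustrative) sunspot umbra values at photospheric heights:
# n_H ~ 1e17 cm^-3, T ~ 4000 K, B ~ 3000 G
print(plasma_beta(1e17, 4000.0, 3000.0))  # ~0.15, i.e., beta < 1 (magnetically dominated)
```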
Of course, not all regions of the Sun’s lower atmosphere are quite so straightforward. Weaker magnetic elements, including small-scale MBPs (Keys et al. 2020), are not able to sustain dominant magnetic pressures as their fields expand with atmospheric height. This results in the transition to a ‘high-\(\beta \)’ environment, where the plasma pressure dominates over the magnetic pressure (i.e., \(\beta > 1\)), which has been observed and modeled under a variety of highly magnetic conditions (e.g., Borrero and Ichimoto 2011; Jess et al. 2013; Bourdin 2017; Grant et al. 2018). This transition has important implications for the embedded waves, since the allowable modes become affected as the wave guide passes through the \(\beta \sim 1\) equipartition layer. Here, waves are able to undergo mode conversion/transmission (Schunker and Cally 2006; Cally 2007; Hansen et al. 2016), which has the ability to change the properties and observable signatures of the oscillations. However, we note that under purely quiescent conditions (i.e., related to quiet Sun modeling and observations), the associated intergranular lanes (Lin and Rimmele 1999) and granules themselves (Lites et al. 2008) will already be within the high plasma-\(\beta \) regime at photospheric heights.
Since the turn of the century, there have been a number of reviews published in the field of MHD waves manifesting in the outer solar atmosphere, including those linked to standing (van Doorsselaere et al. 2009; Wang 2011), quasi-periodic (Nakariakov et al. 2005), and propagating (de Moortel 2009; Zaqarashvili and Erdélyi 2009; Lin 2011) oscillations. Many of these review articles focus on the outermost regions of the solar atmosphere (i.e., the corona), or only address waves and oscillations isolated within a specific layer of the Sun’s atmosphere, e.g., the photosphere (Jess and Verth 2016) or the chromosphere (Jess et al. 2015; Verth and Jess 2016). As such, previous reviews have not focused on the coupling of MHD wave activity between the photosphere and chromosphere, which has only recently become possible due to the advancements made in multi-wavelength observations and data-driven MHD simulations. Here, in this review, we examine the current state-of-the-art in wave propagation, coupling, and damping/dissipation within the lower solar atmosphere, comprising both the photosphere and chromosphere, which are the focal points of next-generation ground-based telescopes, such as DKIST.
In addition, we would also like this review to be useful for early career researchers (PhD students and post-doctoral staff) who may not necessarily be familiar with all of the wave-based analysis techniques the solar physics community currently has at its disposal, let alone the wave-related literature currently in the published domain. As a result, we wish this review to deviate from traditional texts that focus on summarizing and offering potential follow-up interpretations of research findings. Instead, we will present traditional and state-of-the-art methods for detecting, isolating, and quantifying wave activity in the solar atmosphere. This is particularly important since modern data sequences acquired at cutting-edge observatories are providing us with incredible spatial, spectral, and temporal resolutions that require efficient and robust analysis tools in order to maximize the scientific return. Furthermore, we will highlight how the specific analysis methods employed often strongly influence the scientific results obtained, hence it is important to ensure that the techniques applied are fit for purpose. To demonstrate the observational improvements made over the last \(\sim 50\) years, we draw the reader’s attention to Figs. 2 and 3. Both figures show sunspot structures captured using the best techniques available at the time. However, with advancements made in imaging (adaptive) optics, camera architectures, and post-processing algorithms, the drastic improvements are clear to see, with the high-quality data sequences shown in Fig. 3 highlighting the incredible observations of the Sun’s lower atmosphere we currently have at our disposal.
Observations of a sunspot (top row) and a quiet-Sun region (middle row) in the lower solar atmosphere, sampled at three wavelength positions in the Ca ii 8542 Å spectral line from the 1 m Swedish Solar Telescope (SST). The wavelength positions, from left to right, correspond to \(-900\) mÅ, \(-300\) mÅ, and 0 mÅ from the line core, marked with vertical dashed lines in the bottom-right panel, where the average spectral line and all sampled positions are also depicted. The bottom-left panel illustrates a photospheric image sampled with a broadband filter (centered at 3950 Å; filter width \(\approx 13.2\) Å). For better visibility, only a small portion of the observed images is presented. All images are squared. Images courtesy of the Rosseland Centre for Solar Physics, University of Oslo
After the wave detection and analysis techniques have been identified, with their strengths/weaknesses defined, we will then take the opportunity to summarize recent theoretical and observational research focused on the generation, propagation, coupling, and dissipation of wave activity, spanning the base of the photosphere through to the upper echelons of the chromosphere that couple into the transition region and corona above. Naturally, addressing a key question in the research domain may subsequently pose two or three more, while pushing the boundaries of observational techniques and/or theoretical modeling tools may lead to ambiguities or caveats in the subsequent interpretations. This is not only to be expected, but should be embraced as a reminder of the era of rapid discovery we currently find ourselves in. The open questions we will pose not only highlight the challenges currently awaiting solution at the dawn of next-generation ground-based and space-borne telescopes, but will also set the scene for research projects spanning decades to come.
2 Wave analysis tools
Identifying, extracting, quantifying, and understanding wave-related phenomena in astrophysical time series is a challenging endeavor. Signals that are captured by even the most modern charge-coupled devices (CCDs) and scientific complementary metal-oxide-semiconductor (sCMOS) detectors are accompanied by an assortment of instrumental and noise signals that act to mask the underlying periodic signatures. For example, the particle nature of the incident photons leads to Poisson-based shot noise, resulting in randomized intensity fluctuations about the time series mean (Terrell 1977; Delouille et al. 2008), which can reduce the clarity of wave-based signatures. Furthermore, instrumental and telescope effects, including temperature sensitivity and pointing stability, can lead to mixed signals either swamping the signatures of wave motion, or artificially creating false periodicities in the resulting data products. Hence, without large wave amplitudes it becomes a challenge to accurately constrain weak wave signals in even the most modern observational time series, especially once the wave fluctuations become comparable to the noise limitations of the data sequence. In the following sub-sections we will document an assortment of commonly used tools available to the solar physics community that can help quantify wave motion embedded in observational data.
2.1 Observations
In order for meaningful comparisons to be made from the techniques presented in Sect. 2, we will benchmark their suitability using two observed time series. We would like to highlight that the algorithms described and demonstrated below can be applied to any form of observational data product, including intensities, Doppler velocities, and spectral line-widths. As such, it is important to ensure that the input time series are scientifically calibrated before these wave analysis techniques are applied.
2.1.1 HARDcam: 2011 December 10
The Hydrogen-alpha Rapid Dynamics camera (HARDcam; Jess et al. 2012a) is an sCMOS instrument designed to acquire high-cadence H\(\alpha \) images at the DST facility. The data captured by HARDcam on 2011 December 10 consists of 75 min (16:10–17:25 UT) of H\(\alpha \) images, acquired through a narrowband 0.25 Å Zeiss filter, obtained at 20 frames per second. Active region NOAA 11366 was chosen as the target, which was located at heliocentric coordinates (\(356''\), \(305''\)), or N17.9W22.5 in the more conventional heliographic coordinate system. A non-diffraction-limited imaging platescale of \(0{\,}.{\!\!}{''}138\) per pixel was chosen to provide a field-of-view size equal to \(71''\times 71''\). During the observing sequence, high-order adaptive optics (Rimmele 2004; Rimmele and Marino 2011) and speckle reconstruction algorithms (Wöger et al. 2008) were employed, providing a final cadence for the reconstructed images of 1.78 s. The dataset has previously been utilized in a host of scientific studies (Jess et al. 2013, 2016, 2017; Krishna Prasad et al. 2015; Albidah et al. 2021) due to the excellent seeing conditions experienced and the fact that the sunspot observed was highly circularly symmetric in its shape. A sample image from this observing campaign is shown in the right panel of Fig. 4, alongside a simultaneous continuum image captured by the Helioseismic and Magnetic Imager (HMI; Schou et al. 2012), onboard the Solar Dynamics Observatory (SDO; Pesnell et al. 2012).
An SDO/HMI full-disk continuum image (left), with a red box highlighting the HARDcam field-of-view captured by the DST facility on 2011 December 10. An H\(\alpha \) line core image of active region NOAA 11366, acquired by HARDcam at 16:10 UT, is displayed in the right panel. Axes represent heliocentric coordinates in arcseconds
In addition to the HARDcam data of this active region, we also accessed data from the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) onboard the SDO. Here, we obtained 1700 Å continuum (photospheric) images with a cadence of 24 s and spanning a 2.5 h duration. The imaging platescale is \(0{\,}.{\!\!}{''}6\) per pixel, with a \(350\times 350\) pixel\(^{2}\) cut-out providing a \(210''\times 210''\) field-of-view centered on the NOAA 11366 sunspot. The SDO/AIA images are used purely for the purposes of comparison to HARDcam information in Sect. 2.3.1.
2.1.2 SuFI: 2009 June 9
The Sunrise Filter Imager (SuFI; Gandorfer et al. 2011) onboard the Sunrise balloon-borne solar observatory (Solanki et al. 2010; Barthol et al. 2011; Berkefeld et al. 2011) sampled multiple photospheric and chromospheric heights, with a 1 m telescope, in distinct wavelength bands during its first and second flights in 2009 and 2013, respectively (Solanki et al. 2017). High quality, seeing-free time series of images in the 300 nm and 397 nm (Ca ii H) bands (approximately corresponding to the low photosphere and low chromosphere, respectively) were acquired by SuFI/Sunrise on 2009 June 9, between 01:32 UTC and 02:00 UTC, at a cadence of 12 s after phase-diversity reconstructions (Hirzberger et al. 2010, 2011). The observations sampled a quiet region located at solar disk center with a field of view of \(14''\times 40''\) and a spatial sampling of \(0{\,}.{\!\!}{''}02\) per pixel. Figure 5 illustrates sub-field-of-view sample images in both bands (with an average height difference of \(\approx 450\) km; Jafarzadeh et al. 2017d), along with a magnetic-field strength map obtained from Stokes inversions of the Fe i 525.02 nm spectral line from the Sunrise Imaging Magnetograph eXperiment (IMaX; Martínez Pillet et al. 2011). A small magnetic bright point is also marked on all panels of Fig. 5 with a circle. Wave propagation between these two atmospheric layers in the small magnetic element is discussed in Sect. 2.4.1.
A small region of an image acquired in 300 nm (left) and in Ca ii H spectral lines (middle) from SuFI/Sunrise, along with their corresponding line-of-sight magnetic fields from IMaX/Sunrise (right). The latter ranges between \(-1654\) and 2194 G. The circle includes a small-scale magnetic feature whose oscillatory behavior is shown in Fig. 25
2.2 One-dimensional Fourier analysis
Traditionally, Fourier analysis (Fourier 1824) is used to decompose time series into a set of cosines and sines of varying amplitudes and phases in order to recreate the input lightcurve. Importantly, for Fourier analysis to accurately benchmark embedded wave motion, the input time series must be comprised of both linear and stationary signals. Here, a purely linear signal can be characterized by Gaussian behavior (i.e., fluctuations that obey a Gaussian distribution in the limit of large number statistics), while a stationary signal has a constant mean value and a variance that is independent of time (Tunnicliffe-Wilson 1989; Cheng et al. 2015). If non-linear signals are present, then the time series displays non-Gaussian behavior (Jess et al. 2019), i.e., it contains features that cannot be modeled by linear processes, including time-changing variances, asymmetric cycles, higher-moment structures, etc. In terms of wave studies, these features often manifest in solar observations in the form of sawtooth-shaped structures in time series synonymous with developing shock waves (Fleck and Schmitz 1993; Rouppe van der Voort et al. 2003; Vecchio et al. 2009; de la Cruz Rodríguez et al. 2013; Houston et al. 2018). Of course, it is possible to completely decompose non-linear signals using Fourier analysis, but the subsequent interpretation of the resulting amplitudes and phases is far from straightforward and needs to be treated with extreme caution (Lawton 1989).
On the other hand, non-stationary time series are notoriously difficult to predict and model (Tunnicliffe-Wilson 1989). A major challenge when applying Fourier techniques to non-stationary data is that the corresponding Fourier spectrum incorporates numerous additional harmonic components to replicate the inherent non-stationary behavior, which artificially spreads the true time series energy over an uncharacteristically wide frequency range (Terradas et al. 2004). Ideally, non-stationary data needs to be transformed into stationary data with a constant mean and variance that is independent of time. However, understanding the underlying systematic (acting according to a fixed plan or system; methodical) and stochastic (randomly determined; having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely) processes is often very difficult (Adhikari and Agrawal 2013). In particular, differencing can mitigate stochastic (i.e., non-systematic) processes to produce a difference-stationary time series, while detrending can help remove deterministic trends (e.g., time-dependent changes), but may struggle to alleviate stochastic processes (Pwasong and Sathasivam 2015). Hence, it is often very difficult to ensure observational time series are truly linear and stationary.
The upper-left panel of Fig. 6 displays an intensity time series (lightcurve) that has been extracted from a penumbral pixel in the chromospheric HARDcam H\(\alpha \) data. Here, the intensities have been normalized by the time-averaged quiescent H\(\alpha \) intensity. It can be seen in the upper-left panel of Fig. 6 that in addition to sinusoidal wave-like signatures, there also appears to be a background trend (i.e., moving average) associated with the intensities. Through visual inspection, this background trend does not appear linear, thus requiring higher order polynomials to accurately model and remove. It must be remembered that very high order polynomials will likely begin to show fluctuations on timescales characteristic of the wave signatures under study. Hence, it is important that the lowest order polynomial that best fits the data trends is chosen to avoid contaminating the embedded wave-like signatures with additional fluctuations arising from high-order polynomials. Importantly, the precise method applied to detrend the data can vary depending upon the signal being analyzed (e.g., Edmonds and Webb 1972; Edmonds 1972; Krijger et al. 2001; Rutten and Krijger 2003; de Wijn et al. 2005a, b). For example, some researchers choose to subtract the mean trend, while others prefer to divide by the fitted trend and then subtract ‘1’ from the subsequent time series. Both approaches result in a more stationary time series with a mean value of ‘0’. However, subtracting the mean preserves the original unit of measurement and hence the original shape of the time series (albeit with modified numerical axes labels), while dividing by the mean provides a final unit that is independent of the original measurement and thus provides a method to more readily visualize fractional changes to the original time series. It must be noted that detrending processes, regardless of which approach is selected, can help remove deterministic trends (e.g., time-dependent changes), but often struggle to alleviate stochastic processes from the resulting time series.
An H\(\alpha \) line core intensity time series (upper left; solid black line) extracted from a penumbral location of the HARDcam data described in Sect. 2.1.1. The intensities shown have been normalized by the time-averaged H\(\alpha \) intensity established in a quiet Sun region within the field-of-view. A dashed red line shows a third-order polynomial fitted to the lightcurve, which is designed to detrend the data to provide a stationary time series. The upper-right panel displays the resulting time series once the third-order polynomial trend line has been subtracted from the raw intensities (black line). The solid red line depicts an apodization filter designed to preserve 90% of the original lightcurve, but gradually reduce intensities to zero towards the edges of the time series to help alleviate any spurious signals in the resulting FFT. The lower panel reveals the final lightcurve that is ready for FFT analyses, which has been both detrended and apodized to help ensure the resulting Fourier power is accurately constrained. The horizontal dashed red lines signify the new mean value of the data, which is equal to zero due to the detrending employed
The dashed red line in the upper-left panel of Fig. 6 displays a third-order polynomial trend line fitted to the raw H\(\alpha \) time series. The line of best fit is relatively low order, yet still manages to trace the global time-dependent trend. Subtracting the trend line from the raw intensity lightcurve provides fluctuations about a constant mean equal to zero (upper-right panel of Fig. 6), helping to ensure the resulting time series is stationary. It can be seen that wave-like signatures are present in the lightcurve, particularly towards the start of the observing sequence, where fluctuations on the order of \(\approx 8{\%}\) of the continuum intensity are visible. However, it can also be seen from the right panel of Fig. 6 that between times of approximately 300–1300 s there still appears to be a local increase in the mean (albeit no change to the global mean, which remains zero). To suppress this local change in the mean, higher order polynomial trend lines could be fitted to the data, but it must be remembered that such fitting runs the risk of manipulating the true wave signal. Hence, for the purposes of this example, we will continue to employ third-order polynomial detrending, and make use of the time series shown in the upper-right panel of Fig. 6.
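As a concrete illustration of the detrending step applied in Fig. 6, a minimal Python sketch (using standard NumPy routines; the variable names and synthetic lightcurve are our own, not the HARDcam pipeline) might look as follows:

```python
import numpy as np

def detrend_lightcurve(times, intensities, order=3):
    """Fit a low-order polynomial trend and subtract it, returning a
    time series with a mean of zero in the original measurement units."""
    coeffs = np.polyfit(times, intensities, order)
    trend = np.polyval(coeffs, times)
    return intensities - trend

# Example: a synthetic 300 s periodicity superimposed on a slow quadratic
# drift, sampled at the 1.78 s HARDcam cadence
t = np.arange(0, 4500.0, 1.78)
raw = np.sin(2.0 * np.pi * t / 300.0) + 1e-7 * (t - 2250.0)**2
detrended = detrend_lightcurve(t, raw)  # drift removed, mean ~ 0
```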
For data sequences that are already close to being stationary, one may question why the removal of such background trends is even necessary, since the Fourier decomposition will naturally put the trend components into low-frequency bins. Of course, the quality and/or dynamics of the input time series will have major implications regarding what degree of polynomial is required to accurately transform the data into a stationary time series. However, from the perspective of wave investigations, non-zero means and/or slowly evolving backgrounds will inappropriately apply Fourier power across low frequencies, even though these are not directly wave related, which may inadvertently skew any subsequent frequency-integrated wave energy calculations performed. The sources of such non-stationary processes can be far-reaching, and include aspects related to structural evolution of the feature being examined, local observing conditions (e.g., changes in light levels for intensity measurements), and/or instrumental effects (e.g., thermal impacts that can lead to time-dependent variances in the measured quantities). As such, some of these sources (e.g., structural evolution) are dependent on the precise location being studied, while other sources (e.g., local changes in the light level incident on the telescope) are global effects that can be mapped and removed from the entire data sequence simultaneously. Hence, detrending the input time series helps to ensure that the resulting Fourier power is predominantly related to the embedded wave activity.
Another step commonly taken to ensure the reliability of subsequent Fourier analyses is to apply an apodization filter to the processed time series (Norton and Beer 1976). A Fourier transform assumes an infinite, periodically repeating sequence, thus leading to a looping behavior at the ends of the time series. An apodization filter is therefore a function employed to smoothly bring a measured signal down to zero towards the extreme edges (i.e., beginning and end) of the time series, mitigating against sharp discontinuities that may otherwise arise in the form of false power (edge effect) signatures in the resulting power spectrum.
Typically, the apodization filter is governed by the percentage over which the user wishes to preserve the original time series. For example, a 90% apodization filter will preserve the middle 90% of the overall time series, with the initial and final 5% of the lightcurve being gradually tapered to zero (Dame and Martic 1987). There are many different forms of the apodization filter shape that can be utilized, including tapered cosines, boxcar, triangular, Gaussian, Lorentzian, and trapezoidal profiles, many of which are benchmarked using solar time series in Louis et al. (2015). A tapered cosine is the most common form of apodization filter found in solar physics literature (e.g., Hoekzema et al. 1998), and this is what we will employ here for the purposes of our example dataset. The upper-right panel of Fig. 6 reveals a 90% tapered cosine apodization filter overplotted on top of the detrended H\(\alpha \) lightcurve. Multiplying this apodization filter by the lightcurve results in the final detrended and apodized time series shown in the bottom panel of Fig. 6, where the stationary nature of this processed signal is now more suitable for Fourier analyses. It is worth noting that following successful detrending of the input time series, the apodization percentage chosen can often be reduced, since the detrending process will suppress any discontinuities arising at the edges of the data sequence (i.e., helps to alleviate spectral leakage; Nuttall 1981). As such, the apodization percentage employed may be refined based on the ratio between the amplitude of the (primary) oscillatory signal and the magnitude of the noise present within that signal (i.e., linked to the inherent signal-to-noise ratio; Stoica and Moses 2005; Carlson and Crilly 2010).
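In practice, a 90% tapered cosine apodization filter of the type shown in Fig. 6 can be generated with SciPy's Tukey window, where the `alpha` parameter sets the total taper fraction (here 10%, i.e., 5% at each end of the series); this is a minimal sketch under those assumptions rather than the exact filter used in the figure.

```python
from scipy.signal.windows import tukey

# alpha = 0.1 leaves the central 90% of the samples untouched and
# cosine-tapers the first and final 5% of the series down to zero
window = tukey(detrended.size, alpha=0.1)
apodized = detrended * window  # detrended and apodized, ready for the FFT
```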
Performing a fast Fourier transform (FFT; Cooley and Tukey 1965) of the detrended time series provides a Fourier amplitude spectrum, which can be displayed as a function of frequency. An FFT is a computationally more efficient version of the discrete Fourier transform (DFT; Grünbaum 1982), which only requires \(N\log {N}\) operations to complete compared with the \(N^{2}\) operations needed for the DFT, where N is the number of data points in the time series, which can be calculated by dividing the time series duration by the acquisition cadence. Following a Fourier transform of the input data, the number of (non-negative) frequency bins, \(N_{f}\), can be computed by adding one to the number of samples (to account for the zeroth frequency representing the time series mean; Oppenheim and Schafer 2009), \(N+1\), dividing the result by a factor of two, before rounding up to the nearest integer. The Nyquist frequency is the highest constituent frequency of an input time series that can be evaluated at a given sampling rate (Grenander 1959), and is defined as \(f_{\text{Ny}} = {\mathrm {sampling~rate}}/2 = 1/(2 \times {\text{cadence}})\). To evaluate the frequency resolution, \(\varDelta {f}\), of an input time series, one must divide the Nyquist frequency by the number of non-zero frequency bins (i.e., the number of steps between the zeroth and Nyquist frequencies, N/2), providing,
\[ \varDelta {f} = \frac{f_{\text{Ny}}}{N/2} = \frac{1}{N \times {\text{cadence}}} = \frac{1}{{\text{time series duration}}} \ . \qquad \mathrm{(3)} \]
As a result, it is clear to see that the observing duration plays a pivotal role in the corresponding frequency resolution (see, e.g., Harvey 1985; Duvall et al. 1997; Gizon et al. 2017, for considerations in the helioseismology community). It is also important to note that the frequency bins remain equally spaced across the lowest (zeroth frequency or mean) to highest (Nyquist) frequency that is resolved in the corresponding Fourier spectrum. See Sect. 2.2.1 for a more detailed comparison between the terms involved in Fourier decomposition.
The HARDcam dataset utilized has a cadence of 1.78 s, which results in a Nyquist frequency of \(f_{{\text {Ny}}}\approx 280\,\text{mHz}\) \(\left( \frac{1}{2\times 1.78}\right) \). It is worth noting that only the positive frequencies are displayed in this review for ease of visualization. Following the application of Fourier techniques, both negative and positive frequencies, which are identical except for their sign, will be generated for the corresponding Fourier amplitudes. This is a consequence of the Euler relationship that allows sinusoidal wave signatures to be reconstructed from a set of positive and negative complex exponentials (Smith 2007). Since input time series are real valued (e.g., velocities, intensities, spectral line widths, magnetic field strengths, etc.) with no associated imaginary terms, the Fourier amplitudes associated with the negative and positive frequencies will be identical. This results in the output Fourier transform being Hermitian symmetric (Napolitano 2020). As a result, the output Fourier amplitudes are often converted into a power spectrum (a measure of the square of the Fourier wave amplitude), or following normalization by the frequency resolution, into a power spectral density. This approach is summarized by Stull (1988), where the power spectral density, PSD, can be calculated as,
\[ \mathrm{PSD}(n) = \frac{2\,\left| {\mathcal {F}}_{A}(n)\right| ^{2}}{\varDelta f} \ . \qquad \mathrm{(4)} \]
In Eq. (4), \({\mathcal {F}}_{A}(n)\) is the Fourier amplitude for any given positive frequency, n, while \(\varDelta f\) is the corresponding frequency resolution of the Fourier transform (see definition above and further discussion points in Sect. 2.2.1). Note that the factor of ‘2’ is required due to the wrapping of identical Fourier power at negative frequencies into the positive domain. The normalization of the power spectrum by the frequency resolution is a best practice to ensure that the subsequent plots can be readily compared against other data sequences that may be acquired across shorter or longer observing intervals, hence affecting the intrinsic frequency resolution (see Sect. 2.2.1). As an example, the power spectral density of an input velocity time series, with units of km/s, will have the associated units of km\(^{2}\)/s\(^{2}\)/mHz (e.g., Stangalini et al. 2021b). The power spectral density for the detrended HARDcam H\(\alpha \) time series is depicted in the lower-middle panel of Fig. 7. Here, the intensity time series is calibrated into normalized data number (DN) units, which are often equally labeled as ‘counts’ in the literature. Hence, the resulting power spectral density has units of DN\(^{2}\)/mHz.
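A one-sided power spectral density following Eq. (4) can be computed along the following lines; this is a sketch that assumes a real-valued, detrended, and apodized input, and note that FFT libraries differ in their amplitude conventions, hence the explicit 1/N normalization used here to define \({\mathcal {F}}_{A}(n)\).

```python
import numpy as np

def psd_one_sided(signal, cadence):
    """One-sided power spectral density following Eq. (4):
    PSD(n) = 2 |F_A(n)|^2 / delta_f, for positive frequencies n."""
    n_samples = signal.size
    fourier_amps = np.fft.rfft(signal) / n_samples  # Fourier amplitudes F_A(n)
    freqs = np.fft.rfftfreq(n_samples, d=cadence)   # 0 ... Nyquist [Hz]
    delta_f = 1.0 / (n_samples * cadence)           # frequency resolution, Eq. (3)
    return freqs, 2.0 * np.abs(fourier_amps)**2 / delta_f

freqs, psd = psd_one_sided(apodized, cadence=1.78)  # e.g., HARDcam cadence
```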
Taking the raw HARDcam H\(\alpha \) lightcurve shown in the upper-left panel of Fig. 6, the upper row displays the resultant detrended time series utilizing linear (left), third-order polynomial (middle), and ninth-order polynomial (right) fits to the data. In each panel the dashed red line highlights the line of best fit, while the dashed blue line indicates the resultant data mean that is equal to zero following detrending. The lower row displays the corresponding Fourier power spectral densities for each of the linear (left), third-order polynomial (middle), and ninth-order polynomial detrended time series. Changes to the power spectral densities are particularly evident at low frequencies
An additional step often employed following the calculation of the PSD of an input time series is to remove the Fourier components associated with noise. It can be seen in the lower panels of Fig. 7 that there is a flattening of power towards higher frequencies, which is often due to the white noise that dominates the signal at those frequencies (Hoyng 1976; Krishna Prasad et al. 2017). Here, white noise is defined as fluctuations in a time series that give rise to equal Fourier power across all frequencies, hence giving rise to a flat PSD (Bendat and Piersol 2011). Often, if white noise is believed to be the dominant source of noise in the data (i.e., the signal is well above the detector background noise, hence providing sufficient photon statistics so that photon noise is the dominant source of fluctuations), then its PSD can be estimated by applying Eq. (4) to a random light curve generated following a Poisson distribution, with an amplitude equivalent to the square root of the mean intensity of the time series (Fossum and Carlsson 2005a; Lawrence et al. 2011). Subtraction of the background noise is necessary when deriving, for example, the total power of an oscillation isolated in a specific frequency window (Vaseghi 2008). Other types of noise exist that have discernible power-law slopes associated with their PSDs as a function of frequency. For example, while white noise has a flat power-law slope, pink and red noise display 1/f and \(1/f^{2}\) power-law slopes, respectively, resulting in larger amplitudes at lower frequencies (Kolotkov et al. 2016; Strekalova et al. 2018). The specific dominant noise profile must be understood before it is subtracted from the relevant data PSDs.
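If shot noise is believed to dominate, the flat white-noise PSD described above can be estimated numerically; the sketch below assumes the `psd_one_sided` helper defined earlier and a hypothetical `intensities` array in detector counts.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Poisson-distributed lightcurve sharing the mean of the observations, whose
# fluctuation amplitude is approximately sqrt(mean intensity)
noise_lc = rng.poisson(lam=intensities.mean(), size=intensities.size).astype(float)
_, noise_psd = psd_one_sided(noise_lc - noise_lc.mean(), cadence=1.78)

# The (approximately flat) noise floor, excluding the zero-frequency bin,
# can then be subtracted from the PSD of the observed time series
noise_floor = noise_psd[1:].mean()
```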
As a result of the detrending employed in Fig. 6, the absolute Fourier wave amplitude related to a frequency of 0 Hz (i.e., representing the time series mean; upper panel of Fig. 8) is very low; some 4 orders-of-magnitude lower than the power associated with white noise signatures at high frequencies. Of course, if the processed time series mean is exactly zero, then the Fourier wave amplitude at 0 Hz should also be zero. In the case of Fig. 8, the detrended time series does have a zero mean. However, because the time series is not antisymmetric about the central time value, it means that the application of the tapered cosine apodization function results in a very small shift in the time series mean away from the zero value. As a result, the subsequent Fourier amplitudes are fractionally (e.g., at the \(10^{-8}\) level for the upper panel of Fig. 8) above the zero point. Once the processes of detrending and apodization are complete, it is possible to re-calculate the time series mean and subtract this value to ensure the processed mean remains zero before the application of Fourier analyses. However, for the purposes of Figs. 7 and 8, this additional mean subtraction has not been performed to better highlight this potential artifact at the lowest temporal frequencies.
Fourier power spectrum of the HARDcam H\(\alpha \) detrended lightcurve shown in the lower panel of Fig. 6 (top). For the purposes of wave filtering, a step function is shown on the Fourier spectrum using a dashed red line (middle left), where the step function equals unity between frequencies spanning 3.7–5.7 mHz (i.e., \(4.7\pm 1.0\) mHz). Multiplying the Fourier power spectrum by this step function results in isolated power features, which are displayed in the middle-right panel. Alternatively, a Gaussian function centered on 4.7 mHz, with a FWHM of 2.0 mHz, is overplotted on top of the Fourier power spectrum using a red line in the lower-left panel. Multiplying the power spectrum by the Gaussian function results in similar isolated power features, shown in the lower-right panel, but with greater apodization of edge frequencies to help reduce aliasing upon reconstruction of the filtered time series
Note that Fig. 8 does not have the frequency axis displayed on a log-scale in order to reveal the 0 Hz component. As such, the upper frequency range is truncated to \(\approx 28\) mHz to better reveal the signatures present at the lower frequencies synonymous with wave activity in the solar atmosphere. The suppression of Fourier wave amplitudes at the lowest frequencies suggests that the third-order polynomial trend line fitted to the raw H\(\alpha \) intensities is useful at removing global trends in the visible time series. However, as discussed above, care must be taken when selecting the polynomial order to ensure that the line of best fit does not interfere with the real wave signatures present in the original lightcurve. To show the subtle, yet important impacts of choosing a suitable trend line, Fig. 7 displays the resultant detrended time series of the original HARDcam H\(\alpha \) lightcurve for three different detrending methods, e.g., the subtraction of a linear, a third-order polynomial, and a ninth-order polynomial line of best fit. It can be seen from the upper panels of Fig. 7 that the resultant (detrended) lightcurves have different perturbations away from the new data mean of zero. This translates into different Fourier signatures in the corresponding power spectral densities (lower panels of Fig. 7), which are most apparent at the lowest frequencies (e.g., \(<3\) mHz). Therefore, it is clear that care must be taken when selecting the chosen order of the line of best fit so that it does not artificially suppress true wave signatures that reside in the time series. It can be seen in the lower-middle panel of Fig. 7 that the largest Fourier power signal is at a frequency of \(\approx 4.7\) mHz, corresponding to a periodicity of \(\approx 210\) s, which is consistent with previous studies of chromospheric wave activity in the vicinity of sunspots (e.g., Felipe et al. 2010; Jess et al. 2013; López Ariste et al. 2016, to name but a few examples).
2.2.1 Common misconceptions involving Fourier space
Translating a time series into the frequency-dependent domain through the application of a Fourier transform is a powerful diagnostic tool for analyzing the frequency content of (stationary) time series. However, when translating between the temporal and frequency domains it becomes easy to overlook the importance of the sampling cadence and the time series duration in the corresponding frequency axis. For example, one common misunderstanding is the belief that increasing the sampling rate of the data (e.g., increasing the frame rate of the observations from 10 frames per second to 100 frames per second) will improve the subsequent frequency resolution of the corresponding Fourier transform. Unfortunately, this is not the case, since increasing the frame rate raises the Nyquist frequency (the highest frequency component that can be evaluated), but does not affect the frequency resolution of the Fourier transform. Instead, to improve the frequency resolution one must obtain a longer-duration time series, although ‘padding’ of the utilized lightcurve can also be employed to increase the number of data points spanning the frequency domain (Lyons 1996), albeit without improving the true frequency resolution (see the discussion of padding below).
To put these aspects into better context, we will outline a worked example that conveys the importance of both time series cadence and duration. Let us consider two complementary data sequences, one from the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) onboard the SDO spacecraft, and one from the 4 m ground-based Daniel K. Inouye Solar Telescope (DKIST; Tritschler et al. 2016; Rimmele et al. 2020; Rast et al. 2021). Researchers undertaking a multi-wavelength investigation of wave activity in the solar atmosphere may choose to employ these types of complementary observations in order to address their science objectives. Here, the AIA/SDO observations consist of 3 h (10,800 s) of 304 Å images taken at a cadence of 12.0 s, while the DKIST observations comprise of 1 h (3600 s) of H\(\alpha \) observations taken by the Visual Broadband Imager (VBI; Wöger 2014) at a cadence of 3.2 s.
The number of samples, N, for each of the time series can be calculated as \(N_{\text{AIA}} = 10{,}800 / 12.0 = 900\) and \(N_{\text{VBI}} = 3600 / 3.2 = 1125\). Therefore, it is clear that even though the AIA/SDO observations are obtained over a longer time duration, the higher cadence of the VBI/DKIST observations results in more samples associated with that data sequence. The number of frequency bins, \(N_{f}\), can also be computed as \(N_{f({\text{AIA}})} = (900+1) / 2 = 451\), while \({N_{f({\text{VBI}})} = (1125+1) / 2 = 563}\). Hence, the frequency axes of the corresponding Fourier transforms will be comprised of 451 and 563 positive real frequencies (i.e., \(\ge 0\) Hz) for the AIA/SDO and VBI/DKIST data, respectively. The increased number of frequency bins for the higher cadence VBI/DKIST observations sometimes leads to the belief that this provides a higher frequency resolution. However, we have not yet considered the effect of the image cadence on the corresponding frequency axes.
In the case of the AIA/SDO and VBI/DKIST observations introduced above, the corresponding Nyquist frequencies can be computed as \(f_{\mathrm {Ny(AIA)}} = 1/(2 \times 12.0) \approx 42\) mHz and \({f_{\mathrm {Ny(VBI)}} = 1/(2 \times 3.2) \approx 156}\) mHz, respectively. As a result, it should become clear that while the VBI/DKIST observations result in a larger number of corresponding frequency bins (i.e., \(N_{f({\text{VBI}})} > N_{f({\text{AIA}})}\)), these frequency bins are required to cover a larger frequency interval up to the calculated Nyquist value. Subsequently, for the case of the AIA/SDO and VBI/DKIST observations, the corresponding frequency resolutions can be calculated as \({\varDelta {f}_{\text{AIA}} = 1/10{,}800 = 0.0926}\) mHz and \({\varDelta {f}_{\text{VBI}} = 1/3600 = 0.2778}\) mHz, respectively. Note that while the frequency resolution is constant, the same cannot be said for the period resolution due to the reciprocal nature between these variables. For example, at a frequency of 3.3 mHz (\(\approx 5\) min oscillation), the period resolution for VBI/DKIST is \(\approx 25\) s (i.e., \(\approx 303\pm 25\) s), while for AIA/SDO the period resolution is \(\approx 8\) s (i.e., \(\approx 303\pm 8\) s). Similarly, at a frequency of 5.6 mHz (\(\approx 3\) min oscillation), the period resolutions for VBI/DKIST and AIA/SDO are \(\approx 9\) s (i.e., \(\approx 180\pm 9\) s) and \(\approx 3\) s (i.e., \(\approx 180\pm 3\) s), respectively.
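The numbers quoted in this worked example follow directly from the definitions given in Sect. 2.2, as the short sketch below verifies for the assumed AIA/SDO and VBI/DKIST sequences.

```python
for name, duration, cadence in [("AIA/SDO", 10800.0, 12.0),
                                ("VBI/DKIST", 3600.0, 3.2)]:
    n_samples = int(duration / cadence)   # N = duration / cadence
    n_bins = n_samples // 2 + 1           # non-negative frequency bins, N_f
    f_nyquist = 1.0 / (2.0 * cadence)     # Nyquist frequency [Hz]
    delta_f = 1.0 / duration              # frequency resolution, Eq. (3) [Hz]
    print(f"{name}: N={n_samples}, N_f={n_bins}, "
          f"f_Ny={f_nyquist * 1e3:.0f} mHz, df={delta_f * 1e3:.4f} mHz")

# AIA/SDO:   N=900,  N_f=451, f_Ny=42 mHz,  df=0.0926 mHz
# VBI/DKIST: N=1125, N_f=563, f_Ny=156 mHz, df=0.2778 mHz
```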
Figure 9 depicts the Fourier frequencies (left panel), and their corresponding periodicities (right panel), as a function of the derived frequency bin. It can be seen from the left panel of Fig. 9 that the AIA/SDO observations produce a lower number of frequency bins (i.e., a result of fewer samples, \(N_{\text{AIA}} < N_{\text{VBI}}\)), alongside a smaller peak frequency value (i.e., a lower Nyquist frequency, \({f_{\mathrm {Ny(AIA)}}} < {f_{\mathrm {Ny(VBI)}}}\), caused by the lower temporal cadence). However, as a result of the longer duration observing sequence for the AIA/SDO time series (i.e., 3 h for AIA/SDO versus 1 h for VBI/DKIST), the resulting frequency resolution is better (i.e., \({\varDelta {f}_{\text{AIA}}} < {\varDelta {f}_{\text{VBI}}}\)), allowing more precise frequency-dependent phenomena to be uncovered in the AIA/SDO observations. Of course, due to the AIA/SDO cadence being longer than that of VBI/DKIST (i.e., 12.0 s for AIA/SDO versus 3.2 s for VBI/DKIST), this results in the inability to examine the fastest wave fluctuations, which can be seen more clearly in the right panel of Fig. 9, whereby the VBI/DKIST observations are able to reach lower periodicities when compared to the complementary AIA/SDO data sequence. The above scenario is designed to highlight the important interplay between observing cadences and durations with regards to the quantitative parameters achievable through the application of Fourier transforms. For example, if obtaining the highest possible frequency resolution is of paramount importance to segregate closely matched wave frequencies, then it is the overall duration of the time series (not the observing cadence) that facilitates the necessary frequency resolution.
The frequencies (left panel) and corresponding periodicities (right panel) that can be measured through the application of Fourier analysis to an input time series. Here, the solid blue lines depict AIA/SDO observations spanning a 3 h duration and acquired with a temporal cadence of 12.0 s, while the solid red lines highlight VBI/DKIST observations spanning a 1 h window and acquired with a temporal cadence of 3.2 s. It can be seen that both the cadence and observing duration play pivotal roles in the resulting frequencies/periodicities achievable, with the longer duration AIA/SDO observations providing a better frequency resolution, \(\varDelta {f}\), while the higher cadence VBI/DKIST data results in a better Nyquist frequency that allows more rapid wave fluctuations to be studied. In the left and right panels, the dashed blue and red lines depict the Nyquist frequencies and corresponding periodicities for the AIA/SDO and VBI/DKIST data sequences, respectively (see text for more information)
Another important aspect to keep in mind is that the Fourier spectrum is only an estimate of the real power spectrum of the studied process. The finite-duration time series, noise, and distortions due to the intrinsic covariance within each frequency bin may lead to spurious peaks in the spectrum, which could be wrongly interpreted as real oscillations. As a result, one may believe that by considering longer time series the covariance of each frequency bin will reduce, but this is not true since the bin width itself becomes narrower. One way forward is to divide the time series into different segments and average the resulting Fourier spectra calculated from each sub-division—the so-called Welch method (Welch 1967), at the cost of reducing the resolution of frequencies explored. However, data from ground-based observatories are generally limited to 1–2 h each day, and it is not always possible to obtain such long time series. Therefore, special attention must be paid when interpreting the results.
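SciPy provides a standard implementation of the Welch method; a minimal sketch, assuming a detrended `signal` array and its cadence in seconds, would be:

```python
from scipy.signal import welch

# Averaging modified periodograms over ~50%-overlapping segments reduces the
# variance of the PSD estimate, at the cost of a coarser frequency resolution
freqs, psd = welch(signal, fs=1.0 / cadence, nperseg=signal.size // 4)
```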
It is also possible to artificially increase the duration of the input time series through the process known as ‘padding’ (Ransom et al. 2002), which has been employed across a wide range of solar studies incorporating the photosphere, chromosphere, and corona (e.g., Ballester et al. 2002; Auchère et al. 2016; Hartlep and Zhao 2021; Jafarzadeh et al. 2021). Here, the beginning and/or end of the input data sequence is appended with a large number of data points with values equal to the mean of the overall time series. The padding adds no additional power to the data, but it acts to increase the fine-scale structure present in the corresponding Fourier transform since the overall duration of the data has been artificially increased. Note that padding with the data mean is preferable to padding with zeros since this alleviates the introduction of low-frequency power into the subsequent Fourier transform. Of course, if the input time series had previously been detrended (see Sect. 2.2) so that the resulting mean of the data is zero, then zero-padding and padding with the time series mean are equivalent.
Note that the process of padding is often perceived to increase the usable Fourier frequency resolution of the dataset, which is unfortunately incorrect. The use of padded time series acts to reveal small-scale structure in the output Fourier transform, but as it does not add any real signal to the input data sequence, the frequency resolution remains governed by the original time series characteristics (Eriksson 1998). As such, padding cannot recover and/or recreate any missing information in the original data sequence. This effect can be visualized in Fig. 10. Here, a resultant wave consisting of two sinusoids with normalized frequencies 0.075 and 0.125 of the sampling frequency is cropped to 32 and 64 data points in length. Figure 10a shows the corresponding power spectral density (PSD) following Fourier transformation on both the raw 32 data samples array (solid black line with circular data points) and the original 32 data point array that has been padded to a total of 64 data points (dashed black line with crosses). In addition, Fig. 10b shows another PSD for the data array containing 64 input samples (solid black line with circular data points), alongside the same PSD for the original 32 data point array that has been padded to a total of 64 data points (dashed black line with crosses; same as Fig. 10a). From Fig. 10a it can be seen that while the padding increases the number of data points along the frequency axis (and therefore creates some additional small-scale fluctuations in the resulting PSD), it does not increase the frequency resolution to a value needed to accurately identify the two sinusoidal components. This is even more apparent in Fig. 10b, where the Fourier transform of the time series containing 64 data points now contains sufficient information and frequency resolution to begin to segregate the two sinusoidal components. The padded array (32 data points plus 32 padded samples) contains the same number of elements along the frequency axis, but does not increase the frequency resolution to allow the quantification of the two embedded wave frequencies. Padding is, however, often employed to decrease the computational time, since FFT algorithms work most efficiently when the number of samples is an integer power of 2.
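The experiment of Fig. 10 can be reproduced in a few lines of Python; in this sketch the zero-padded 32-sample series gains additional points along the frequency axis, yet only the genuine 64-sample series begins to separate the two input frequencies.

```python
import numpy as np

def two_waves(t):
    # Normalized frequencies of 0.075 and 0.125 of the sampling frequency
    return np.sin(2.0 * np.pi * 0.075 * t) + np.sin(2.0 * np.pi * 0.125 * t)

short = two_waves(np.arange(32))
padded = np.concatenate([short, np.zeros(32)])  # 32 samples + 32 zeros
longer = two_waves(np.arange(64))               # 64 genuine samples

# Both transforms return the same number of frequency bins (33), but only
# the PSD of `longer` begins to resolve the two sinusoidal components
psd_padded = np.abs(np.fft.rfft(padded))**2
psd_longer = np.abs(np.fft.rfft(longer))**2
```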

Image reproduced with permission from Eriksson (1998)
Panels revealing the effect of padding an input time series on the resulting Fourier transform. For this example, two sinusoids are superimposed with normalized frequencies equal to 0.075 and 0.125 of the sampling frequency. Panels a, b show the resulting power spectral densities (PSDs) following the Fourier transforms of 32 input data points (solid black line with circular data points; left) and 64 input data points (solid black line with circular data points; right), respectively. In both panels, the dashed black lines with crosses represent the Fourier transforms of 32 input data points that have been padded to a total of 64 data points. It can be seen that the increased number of data points associated with the padded array results in more samples along the frequency axis, but this does not improve the frequency resolution to the level consistent with supplying 64 genuine input samples (solid black line in the right panel).
Of course, while data padding strictly does not add usable information into the original time series, it can be utilized to provide better visual segregation of closely spaced frequencies. To show an example of this application, Fig. 11 displays the effects of padding and time series duration in a similar format to Fig. 10. In Fig. 11, the upper-left panel shows an intensity time series that is created from the superposition of two closely spaced frequencies, here 5.0 mHz and 5.4 mHz. The resultant time series is \(\approx 3275\) s (\(\sim 55\) min) long, and constructed with a cadence of 3.2 s to remain consistent with the VBI/DKIST examples shown earlier in this section. The absolute extent of this 3275 s time series is bounded in the upper-left panel of Fig. 11 by the shaded orange background. In order to pad this lightcurve, a new time series is constructed with twice as many data points, making the time series duration now \(\approx 6550\) s (\(\sim 110\) min). The original \(\approx 3275\) s lightcurve is placed in the middle of the new (expanded) array, thus providing zero-padding at the start and end of the time series. The corresponding power spectral densities (PSDs) for both the original and padded time series are shown in the lower-left panel of Fig. 11 using black and red lines, respectively. Note that the frequency axis is cropped to the range of 1–10 mHz for better visual clarity. It is clear that the original input time series creates a broad spectral peak at \(\approx 5\) mHz, but the individual 5.0 mHz and 5.4 mHz components are not visible in the corresponding PSD (solid black line in the lower-left panel of Fig. 11). On the other hand, the PSD from the padded array (solid red line in the lower-left panel of Fig. 11) does show a double peak corresponding to the 5.0 mHz and 5.4 mHz wave components, highlighting how such padding techniques can help segregate multi-frequency wave signatures.
Upper left: Inside the shaded orange region is a synthetic lightcurve created from the superposition of 5.0 mHz and 5.4 mHz waves, which are generated with a 3.2 s cadence (i.e., from VBI/DKIST) over a duration of \(\approx 3275\) s. This time series is zero-padded into a \(\approx 6550\) s array, which is displayed in its entirety in the upper-left panel using a solid black line. Upper right: The same resultant waveform created from the superposition of 5.0 mHz and 5.4 mHz waves, only now generated for the full \(\approx 6550\) s time series duration (i.e., no zero-padding required). Lower left: The power spectral density (PSD) of the original (un-padded) lightcurve is shown using a solid black line, while the solid red line reveals the PSD of the full zero-padded time series. It is clear that the padded array offers better visual segregation of the two embedded wave frequencies. Lower right: The PSDs for both the full \(\approx 6550\) s time series (solid black line) and the zero-padded original lightcurve (solid red line; same as that depicted in the lower-left panel). It can be seen that while the padded array provides some segregation of the 5.0 mHz and 5.4 mHz wave components, there is no better substitute at achieving high frequency resolution than obtaining long-duration observing sequences. Note that both PSD panels have the frequency axis truncated between 1 and 10 mHz for better visual clarity
Of course, padding cannot be considered a universal substitute for a longer duration data sequence. The upper-right panel of Fig. 11 shows the same input wave frequencies (5.0 mHz and 5.4 mHz), only with the resultant wave now present throughout the full \(\sim 110\) min time sequence. Here, the beat pattern created by the superposition of two closely spaced frequencies can be readily seen, which is a physical manifestation of wave interactions also studied in high-resolution observations of the lower solar atmosphere (e.g., Krishna Prasad et al. 2015). The resulting PSD of the full-duration time series is depicted in the lower-right panel of Fig. 11 using a solid black line. For comparison, the PSD constructed from the padded original lightcurve is also overplotted using a solid red line (same as shown using a solid red line in the lower-left panel of Fig. 11). It is clearly seen that the presence of the wave signal across the full time series provides the most prominent segregation of the 5.0 mHz and 5.4 mHz spectral peaks. While these peaks are also visible in the padded PSD (solid red line), they are less well defined, hence reiterating that while time series padding can help provide better isolation of closely spaced frequencies, there is no better candidate for high frequency resolution than long duration observing sequences.
On the other hand, if rapidly fluctuating waveforms are to be studied, then a high Nyquist frequency is necessary, and this is set by the acquisition cadence rather than by the duration of the observing sequence. Hence, it is important to tailor the observing strategy to ensure the frequency requirements are met. This, of course, can present challenges for particular facilities. For example, if a frequency resolution of \(\varDelta {f} \approx 35~\mu \text{Hz}\) is required (e.g., to probe subtle changes in the frequency distributions found in the aftermath of solar flares; Wiśniewska et al. 2019), this would require an observing duration of approximately 8 continuous hours, which may not be feasible from ground-based observatories that are impacted by variable weather and seeing conditions. Similarly, while space-borne satellites may be unaffected by weather and atmospheric seeing, these facilities may not possess a sufficiently large telescope aperture to probe the wave characteristics of small-scale magnetic elements (e.g., Chitta et al. 2012b; Van Kooten and Cranmer 2017; Keys et al. 2018), and naturally have limited onboard storage and/or telemetry rates, thus creating difficulties in obtaining 8 continuous hours of observations at maximum acquisition cadences. Hence, complementary data products, combining ground-based observations at high cadence with overlapping space-borne data acquired over long durations, are often a good compromise to help provide the frequency characteristics necessary to achieve the science goals. Of course, next-generation satellite facilities, including the recently commissioned Solar Orbiter (Müller et al. 2013, 2020) and the upcoming Solar-C (Shimizu et al. 2020) missions, will provide breakthrough technological advancements to enable longer duration and higher cadence observations of the lower solar atmosphere than previously obtained from space. Another alternative for achieving both long time series and high-cadence observations is the use of balloon-borne observatories, including the Sunrise (Bello González et al. 2010b) and Flare Genesis (Murphy et al. 1996; Bernasconi et al. 2000) experiments, where the data are stored on onboard discs. Such missions, however, have their own challenges and are limited to only a couple of days of observations during each flight.
2.2.2 Calculating confidence levels
When examining Fourier spectra, it is often difficult to pinpoint exactly which features are significant and which power spikes may be the result of noise and/or spurious signals contained within the input time series. A robust method of determining the confidence level of individual power peaks is to compare the Fourier transform of the input time series with the Fourier transforms of a large number (often exceeding 1000) of randomized lightcurves based on the original values (i.e., ensuring an identical distribution of intensities throughout each new randomized time series; O’Shea et al. 2001). Following the randomization and computation of FFTs of the new time series, the probability, p, of randomized fluctuations being able to reproduce a given Fourier power peak in the original spectrum can be calculated. To do this, the Fourier power of each randomized series at every frequency element is compared to the power calculated for the original time series, with the proportion of permutations giving a Fourier power greater than, or equal to, that of the original time series providing an estimate of the probability, p. Here, a small value of p suggests that the original lightcurve contains real oscillatory phenomena, while a large value of p indicates that there are few (or no) real periodicities contained within the data (Banerjee et al. 2001; O’Shea et al. 2001). Indeed, it is worth bearing in mind that probability values of \(p=0.5\) are consistent with noise fluctuations (i.e., the variance of a binomial distribution is greatest at \(p=0.5\); Lyden et al. 2019), hence why the identification of real oscillations requires small values of p.
Following the calculation of the probability, p, the value can be reversed to provide a percentage probability that the detected oscillatory phenomenon is real, through the relationship,
\[
p_{\text {real}} = (1 - p) \times 100\% . \qquad (5)
\]
Here, \(p_{\text {real}}=100\%\) would suggest that the wave motion present in the original time series is real, since no (i.e., \(p=0\)) randomized time series provided similar (or greater) Fourier power. Conversely, \(p_{\text {real}}=0\%\) would indicate a real (i.e., statistically significant) power deficit at that frequency, since all (i.e., \(p=1\)) randomized time series provided higher Fourier power at that specific frequency. Finally, a value of \(p_{\text {real}} = 50\%\) would indicate that the power peak is not due to actual oscillatory motions. A similar approach is to calculate the means and standard deviations of the Fourier power values for each independent frequency corresponding to the randomized time series. This provides a direct estimate of whether the original measured Fourier power is within some number of standard deviations of the mean randomized-data power density. As a result, probability estimations of the detected Fourier peaks can be made provided the variances and means of the randomized Fourier power values are independent (i.e., follow a normal distribution; Bell et al. 2018).
If a large number (\(\ge 1000\)) of randomized permutations are employed, then the fluctuation probabilities will tend to Gaussian statistics (Linnell Nemec and Nemec 1985; Delouille et al. 2008; Jess et al. 2019). In this case, the confidence level can be obtained using a standardized Gaussian distribution. For many solar applications (e.g., McAteer et al. 2002b, 2003; Andic 2008; Bello González et al. 2009; Stangalini et al. 2012; Dorotovič et al. 2014; Freij et al. 2016; Jafarzadeh et al. 2017d, to name but a few examples), a confidence level of 95% is typically employed as a threshold for reliable wave detection. In this case, \(99\% \le p_{\text {real}} \le 100\%\) (or \(0.00 \le p \le 0.01\)) is required to satisfy the desired 95% confidence level.
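A minimal sketch of this randomization approach is given below, assuming a detrended and apodized one-dimensional lightcurve; the function and variable names are illustrative rather than those of any published pipeline:

```python
# Sketch of the permutation-based confidence test described above,
# assuming `lc` is a detrended, apodized 1-D numpy array.
import numpy as np

def fft_power(x):
    return np.abs(np.fft.rfft(x)) ** 2

def percentage_probability(lc, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    power_obs = fft_power(lc)
    exceed = np.zeros_like(power_obs)
    for _ in range(n_perm):
        # Shuffling preserves the intensity distribution of the lightcurve
        exceed += fft_power(rng.permutation(lc)) >= power_obs
    p = exceed / n_perm            # chance probability per frequency bin
    return (1.0 - p) * 100.0       # p_real, following Eq. (5)

# Frequencies with p_real >= 99% satisfy the 95% confidence level used here
```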
To demonstrate a worked example, we utilize the HARDcam H\(\alpha \) time series shown in the left panel of Fig. 6, which consists of 2528 individual time steps. This, combined with 1000 randomized permutations of the lightcurve, provides 1000 FFTs with 1000 different measures in each frequency bin; more than sufficient to allow the accurate use of Gaussian number statistics (Montgomery and Runger 2003). For each randomization, the resulting Fourier spectrum is compared to that depicted in the upper panel of Fig. 8, with the resulting percentage probabilities, \(p_{\text {real}}\), calculated according to Eq. (5) for each of the temporal frequencies. The original Fourier power spectrum, along with the percentage probabilities for each corresponding frequency, are shown in the left panel of Fig. 12. It can be seen that the largest power signal at \(\approx 4.7\) mHz (\(\approx 210\) s) has a high probability, suggesting that this is a detection of a real oscillation. Furthermore, the neighboring frequencies also have probabilities above 99%, further strengthening the interpretation that wave motion is present in the input time series. It should be noted that with potentially thousands of frequency bins in the high-frequency regime of an FFT, having some fraction of points that exceed a 95% (or even 99%) confidence interval is to be expected. Therefore, many investigations also demand some degree of coherency in the frequency and/or spatial distributions to better verify the presence of a real wave signal (similar to the methods described by Durrant and Nesis 1982; Di Matteo and Villante 2018). To better highlight which frequencies demonstrate confidence levels exceeding 95%, the right panel of Fig. 12 overplots (using bold red crosses) those frequencies containing percentage probabilities in excess of 99%.
The full frequency extent of the Fourier power spectral densities shown in the lower-middle panel of Fig. 7, displayed using a log–log scale for better visual clarity (left panel). Overplotted using a solid red line are the percentage probabilities, \(p_{\text {real}}\), computed over 1000 randomized permutations of the input lightcurve. Here, any frequencies with \(p_{\text {real}} \ge 99\%\) correspond to a statistical confidence level in excess of 95%. The same Fourier power spectral density is shown in the right panel, only now with red crossed symbols highlighting the locations where the Fourier power provides confidence levels greater than 95%
2.2.3 Lomb–Scargle techniques
A requirement for the implementation of traditional Fourier-based analyses is that the input time series is regularly and evenly sampled. This means that each data point of the lightcurve used should be obtained using the same exposure time, and subsequent time steps should be acquired with a strict, uniform cadence. Many ground-based and space-borne instruments employ digital synchronization triggers for their camera systems that can bring timing uncertainties down to the order of \(10^{-6}\) s (Jess et al. 2010b), which is often necessary in high-precision polarimetric studies (Kootz 2018). This helps to ensure the output measurements are sufficiently sampled for the application of Fourier techniques.
However, it is often not possible to obtain time series with strict and even temporal sampling. For example, raster scans using slit-based spectrographs can lead to irregularly sampled observations due to the physical time required to move the spectral slit. Also, some observing strategies interrupt regularly sampled data series for the measurement of Stokes I/Q/U/V signals every few minutes, hence introducing data gaps during these times (e.g., Samanta et al. 2016). Furthermore, hardware requiring multiple clocks to control components of the same instrument (e.g., the mission data processor and the polarization modulator unit on board the Hinode spacecraft; Kosugi et al. 2007) may have a tendency to drift away from one another, hence affecting the regularity of long-duration data sequences (Sekii et al. 2007). In addition, some facilities, including the Atacama Large Millimeter/submillimeter Array (ALMA; Wootten and Thompson 2009; Wedemeyer et al. 2016), require routine calibrations that must be performed approximately every 10 min (with each calibration taking \(\sim 2.5\) min; Wedemeyer et al. 2020), hence introducing gaps in the final time series (Jafarzadeh et al. 2021). Finally, in the case of ground-based observations, a period of reduced seeing quality or the passing of a localized cloud will result in a number of compromised science frames, which require removal and subsequent interpolation (Krishna Prasad et al. 2016).
If the effect of data sampling irregularities is not believed to be significant (i.e., is a fraction of the wave periodicities expected), then it is common to interpolate the observations back onto a constant cadence grid (e.g., Jess et al. 2012c; Kontogiannis et al. 2016). Of course, how the data points are interpolated (e.g., linear or cubic fitting) may affect the final product, and as a result, care should be taken when interpolating time series so that artificial periodicities are not introduced into the data through inappropriate interpolation. This is particularly important when the data sequence requires subsequent processing, e.g., taking the derivative of a velocity time series to determine the acceleration characteristics of the plasma. Under these circumstances, inappropriate interpolation of the velocity information may have drastic implications for the derived acceleration data. For this form of analysis, the use of 3-point Lagrangian interpolation is often recommended to ensure the endpoints of the time series remain unaffected due to the use of error propagation formulae (Veronig et al. 2008). However, in the case of very low cadence data, 3-point Lagrangian interpolation may become untrustworthy due to the large temporal separation between successive time steps (Byrne et al. 2013). For these cases, a Savitzky–Golay (Savitzky and Golay 1964) smoothing filter can help alleviate sharp (and often misleading) kinematic values (Byrne 2015).
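The sketch below illustrates these ideas on toy data, regridding an unevenly sampled velocity series with linear interpolation and then using a Savitzky–Golay filter to differentiate it smoothly; the grid spacing, window length, and polynomial order are arbitrary demonstration choices:

```python
# Illustrative sketch: regridding an uneven series and smoothing its
# derivative; all values are toy numbers, not from any real dataset.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)
t_irr = np.sort(rng.uniform(0, 600, 150))        # irregular sample times (s)
v_irr = np.sin(2 * np.pi * 5e-3 * t_irr)         # toy velocity signal

t_reg = np.arange(0, 600, 4.0)                   # uniform 4 s grid
v_reg = np.interp(t_reg, t_irr, v_irr)           # linear interpolation

# Savitzky-Golay differentiation tempers the sharp, misleading
# accelerations that naive differencing of interpolated data can produce
accel = savgol_filter(v_reg, window_length=11, polyorder=3,
                      deriv=1, delta=4.0)
```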
If interpolation of missing data points and subsequent Fourier analyses is not believed to be suitable, then Lomb–Scargle techniques (Lomb 1976; Scargle 1982) can be implemented. As overviewed by Zechmeister and Kürster (2009), the Lomb–Scargle algorithms are useful for characterizing periodicities present in unevenly sampled data products. Often, least-squares minimization processes assume that the data to be fitted are normally distributed (Barret and Vaughan 2012), which may be untrue since the spectrum of a linear, stationary stochastic process naturally follows a \(\chi _{2}^{2}\) distribution (Groth 1975; Papadakis and Lawrence 1993). However, a benefit of implementing the Lomb–Scargle algorithms is that the noise at each individual frequency can be represented by a \(\chi ^{2}\) distribution, which is equivalent to a spectrum being reliably derived from more simplistic least-squares analysis techniques (VanderPlas 2018).
Crucially, Lomb–Scargle techniques differ from conventional Fourier analyses by the way in which the corresponding spectra are computed. While Fourier-based algorithms compute the power spectrum by taking dot products of the input time series with pairs of sine- and cosine-based waveforms, Lomb–Scargle techniques attempt to first calculate a delay timescale so that the sinusoidal pairs are mutually orthogonal at discrete sample steps, hence providing better power estimates at each frequency without the strict requirement of evenly sampled data (Press et al. 2007). In the field of solar physics, Lomb–Scargle techniques tend to be more commonplace in investigations of long-duration periodicities spanning days to months (i.e., often coupled to the solar cycle; Ni et al. 2012; Deng et al. 2017), although they can be used effectively in shorter duration observations where interpolation is deemed inappropriate (e.g., Maurya et al. 2013).
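As a minimal illustration, astropy provides a convenient Lomb–Scargle implementation that can be applied directly to unevenly sampled data; the synthetic 5.5 mHz signal below is purely illustrative:

```python
# Sketch of a Lomb-Scargle periodogram on unevenly sampled data using
# astropy; signal frequency and noise level are arbitrary choices.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(5)
t_irr = np.sort(rng.uniform(0, 3600, 500))       # irregular times (s)
y = np.sin(2 * np.pi * 5.5e-3 * t_irr) + rng.normal(0, 0.5, t_irr.size)

frequency, power = LombScargle(t_irr, y).autopower()
peak = frequency[np.argmax(power)]               # expected near 5.5e-3 Hz
```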
2.2.4 One-dimensional Fourier filtering
Often, it is helpful to filter the time series in order to isolate specific wave signatures across a particular range of frequencies. This is useful for a variety of studies, including the identification of beat frequencies (Krishna Prasad et al. 2015), the more reliable measurement of phase variations between different wavelengths/filters (Krishna Prasad et al. 2017), and in the identification of various wave modes co-existing within single magnetic structures (Keys et al. 2018). From examination of the upper panel of Figs. 8 and 12, it is clear that the frequency associated with peak Fourier power is \(\approx 4.7\) mHz, and is accompanied by high confidence levels exceeding 95%.
If we wish to reconstruct a filtered time series centered on this dominant frequency, then we have a number of options available. The dashed red line in the middle-left panel of Fig. 8 depicts a step function frequency range of \(4.7\pm 1.0\) mHz, whereby the filter is assigned values of ‘1’ and ‘0’ for frequencies inside and outside, respectively, this chosen frequency range. Multiplying the Fourier power spectrum by this step function frequency filter results in the preserved power elements shown in the middle-right panel of Fig. 8, which can be passed through an inverse FFT to create a Fourier filtered time series in the range of \(4.7\pm 1.0\) mHz. However, a step function frequency filter imposes a sharp and distinct transition between elevated power signals and frequencies with zero Fourier power, and this abrupt transition can create aliasing artifacts in the reconstructed time series (Gobbi et al. 2006). Alternatively, to help mitigate against aliasing (i.e., sharp Fourier power transitions at the boundaries of the chosen frequency range), the Fourier power spectrum can be multiplied by a filter that peaks at the desired frequency, before gradually reducing in transmission towards the edges of the frequency range. An example of such a smoothly varying filter is documented in the lower panels of Fig. 8, where a Gaussian centered at 4.7 mHz, with a full-width at half-maximum (FWHM) of 2 mHz, is overplotted on top of the Fourier spectrum using a solid red line. Multiplying the original Fourier spectrum by this profile gradually decreases the power towards zero at the edges of the desired frequency range (lower-right panel of Fig. 8). Performing an inverse FFT on this filtered Fourier power spectrum results in the reconstruction of an H\(\alpha \) lightcurve containing dominant periodicities of \(\approx 210\) s, which can be seen in Fig. 13. This process is equivalent to convolving the detrended intensity time series with the inverse Fourier transform of the chosen Gaussian frequency filter, but we perform this process step-by-step here for the purposes of clarity (a short code sketch of this procedure is provided at the end of this subsection).
The original HARDcam time series (upper solid black line), normalized by the quiescent H\(\alpha \) continuum intensity, and displayed as a function of time. The lower solid black line is a Fourier filtered lightcurve, which has been detrended using a third-order polynomial (right panel of Fig. 6), convolved with a Gaussian frequency filter centered on 4.7 mHz with a FWHM of 2.0 mHz (lower-right panel of Fig. 8), before applying an inverse FFT to reconstruct the filtered time series. For visual clarity, the filtered lightcurve has been offset to bring it closer to the original time series intensities
It must be noted that here we employ a Gaussian frequency filter to smoothly transition the Fourier power to values of zero outside of the desired frequency range. However, other filter shapes can also be chosen, including Lorentzian, Voigt, or even custom profile shapes depending upon the level of smoothing required by the investigators. At present, there is no firm consensus regarding which filter profile shape is best to use, so it may be necessary to choose the frequency filter based upon the specifics of the data being investigated, e.g., the frequency resolution, the amplitude of the spectral components wishing to be studied, the width of the documented Fourier peaks, etc. Of course, we must remind the reader that isolating a relatively limited range of frequencies in Fourier space and transforming these back into real (temporal) space will always result in the appearance of a periodic signal at the frequency of interest, even if the derived Fourier transform was originally noise dominated. Therefore, it is necessary to combine confidence interval analysis (see Sect. 2.2.2) with such Fourier filtering techniques to ensure that only statistically significant power is being considered in subsequent analyses.
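To make the above steps concrete, the following is a minimal sketch (not the authors' exact pipeline) of the Gaussian Fourier filtering described in this subsection, assuming a detrended, apodized lightcurve `lc` with cadence `dt`; the 4.7 mHz center and 2 mHz FWHM mirror the HARDcam example:

```python
# Sketch of Gaussian Fourier filtering of a 1-D lightcurve; the filter
# parameters follow the worked example in the text above.
import numpy as np

def gaussian_fourier_filter(lc, dt, f0=4.7e-3, fwhm=2.0e-3):
    freq = np.fft.fftfreq(lc.size, d=dt)               # two-sided frequencies
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> std. deviation
    # Applying the filter symmetrically about +/- f0 keeps the spectrum
    # Hermitian, so the inverse FFT returns a (numerically) real series
    transmission = np.exp(-((np.abs(freq) - f0) ** 2) / (2.0 * sigma ** 2))
    return np.real(np.fft.ifft(np.fft.fft(lc) * transmission))
```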
2.2.5 Fourier phase lag analysis
Many observational datasets will be comprised of a combination of multi-wavelength and/or multi-component spectral measurements. For example, the Rapid Oscillations in the Solar Atmosphere (ROSA; Jess et al. 2010b) instrument at the DST is able to observe simultaneously in six separate bandpasses. It is common practice to acquire contemporaneous imaging observations through a combination of G-band, 3500 Å and 4170 Å broadband continuum filters, in addition to Ca ii K, Na i D\(_{1}\), and H\(\alpha \) narrowband filters, which allows wave signatures to be studied from the depths of the photosphere through to the base of the transition region (e.g., Morton et al. 2011, 2012; Jess et al. 2012a, b, c; Kuridze et al. 2012; Grant et al. 2015; Krishna Prasad et al. 2015, 2016, 2017; Keys et al. 2018). On the other hand, Fabry–Pérot spectral imaging systems such as the Crisp Imaging Spectropolarimeter (CRISP; Scharmer et al. 2008) and the Interferometric Bi-dimensional Spectrometer (IBIS; Cavallini 2006), are able to capture two-dimensional spatial information (often including spectropolarimetric Stokes I/Q/U/V measurements) across a single or multiple spectral lines. This allows a temporal comparison to be made between various spectral parameters of the same absorption line, such as the full-width at half-maximum (FWHM), intensity, Doppler velocity, and magnitudes of circular/linear polarization (providing spectropolarimetric measurements are made). As a result, harnessing multi-wavelength and/or multi-component observations provides the ability to further probe the coupling of wave activity in the lower solar atmosphere.
The upper panel of Fig. 14 displays two synthetic intensity time series generated with a cadence of 1.78 s (consistent with the HARDcam H\(\alpha \) data products overviewed in Sect. 2.1.1), each with a frequency of 5.6 mHz (\(\approx 180\) s periodicity) and a mean intensity equal to 2. However, the red lightcurve (LC2) is delayed by \(45^{\circ }\), and hence lags behind the black lightcurve (LC1) by 0.785 radians. As part of the standard procedures prior to the implementation of Fourier analysis (see, e.g., Sect. 2.2), each of the time series are detrended (in this case by subtracting a linear line of best fit) and apodized using a 90% tapered cosine apodization filter. The final intensity time series are shown in the lower panel of Fig. 14, and are now suitable for subsequent Fourier analyses.
Synthetic time series (upper panel), each with a cadence of 1.78 s, displaying a frequency of 5.6 mHz (\(\approx 180\) s periodicity) and a mean intensity equal to 2. The red lightcurve is delayed by 45 \(^{\circ }\) (0.785 radians) with respect to the black lightcurve. The lower panel displays the detrended and apodized time series, which are now suitable for subsequent FFT analyses
Following the approaches documented in Sect. 2.2.2, FFTs of the detrended and apodized time series are taken, with 95% confidence levels calculated. The resulting FFT power spectral densities are shown in Fig. 15, where the red crosses indicate frequencies where the associated power is in excess of the calculated 95% confidence levels for each respective time series. It can be seen in both the upper and lower panels of Fig. 15 that the input 5.6 mHz signal is above the 95% confidence threshold for both LC1 and LC2. Next, the cross-power spectrum, \(\varGamma _{12}(\nu )\), between the FFTs of LC1 and LC2 is calculated following the methods described by Bendat and Piersol (2000) as,
\[
\varGamma _{12}(\nu ) = F_{1}(\nu )\,{\overline{F_{2}}}(\nu ) , \qquad (6)
\]
with F denoting an FFT and \({\overline{F}}\) the complex conjugate of the FFT. The cross-power spectrum is a complex array (just like the FFTs from which it is computed), and therefore has components representative of its co-spectrum (\(d(\nu )\); real part of the cross-power spectrum) and quadrature spectrum (\(c(\nu )\); imaginary part of the cross-power spectrum). The co-spectrum from the input time series LC1 and LC2 is shown in the upper panel of Fig. 16. The red cross signifies the frequency where the Fourier power exceeded the 95% confidence level in both FFTs, namely 5.6 mHz, which is consistent with the synthetic lightcurves shown in Fig. 14.
FFT power spectral densities for LC1 (upper panel) and LC2 (lower panel), corresponding to the solid black and red lines in the lower panel of Fig. 14, respectively. The red crosses highlight frequencies where the calculated Fourier power is above the 95% confidence level. It can be seen that the synthetic 5.6 mHz input signal is accurately identified in both corresponding power spectra, with its associated Fourier power being in excess of the 95% confidence threshold. The oscillatory behavior at high frequencies is due to the selected apodization filter
Co-spectrum (upper panel; real part of the cross-power spectrum) of the input time series LC1 and LC2 shown in the lower panel of Fig. 14. The lower panel displays the phase angle between the input time series LC1 and LC2, which corresponds to the phase of the complex cross-spectrum. Here, a positive phase angle indicates that LC1 leads LC2 (i.e., LC2 lags behind LC1), which can be seen visually through examination of the individual lightcurves depicted in Fig. 14. The red crosses indicate the frequency where the calculated Fourier power for LC1 and LC2 both exceed the 95% confidence levels (see Fig. 15). The horizontal dashed blue line in the lower panel highlights a phase angle of 0 \(^{\circ }\)
Finally, the co-spectrum and quadrature spectrum can be utilized to calculate the phase lag between the input lightcurves LC1 and LC2 as a function of frequency, defined by Penn et al. (2011) as,
\[
\phi (\nu ) = \tan ^{-1}\left( \frac{c(\nu )}{d(\nu )}\right) . \qquad (7)
\]
Here, the phase angle, commonly chosen to span the interval \(-180^{\circ } \rightarrow +180^{\circ }\), is simply the phase of the complex cross-spectrum (see the nomenclature of Vaughan and Nowak 1997). The lower panel of Fig. 16 displays the calculated phase angles, again with the red cross highlighting the phase value at the frequency where the Fourier power exceeds the 95% confidence level in both FFTs corresponding to LC1 and LC2. In this example, the phase angle corresponding to a frequency of \(\approx 5.6\) mHz is equal to \(45^{\circ }\), which is consistent with the input lightcurves depicted in Fig. 14. Here, a positive phase angle indicates that LC1 leads LC2 (i.e., LC2 lags behind LC1), which can be visually confirmed in Fig. 14 with LC1 (solid black line) leading LC2 (solid red line).
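A minimal sketch of this cross-spectral phase computation is given below, assuming two detrended, apodized lightcurves of equal length; note that the conjugation convention (and hence the sign of the recovered phase) should always be verified against a known test signal, as done with LC1 and LC2 above:

```python
# Sketch of the cross-spectral phase computation (Eqs. 6 and 7);
# `lc1` and `lc2` are assumed detrended, apodized arrays of equal length.
import numpy as np

def cross_phase(lc1, lc2, dt):
    f1 = np.fft.rfft(lc1)
    f2 = np.fft.rfft(lc2)
    cross = f1 * np.conj(f2)                # cross-power spectrum, Eq. (6)
    d = np.real(cross)                      # co-spectrum
    c = np.imag(cross)                      # quadrature spectrum
    phase = np.degrees(np.arctan2(c, d))    # Eq. (7), spanning -180 to +180 deg
    freq = np.fft.rfftfreq(lc1.size, d=dt)
    return freq, phase
```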
It must be noted that phase angles can be computed for all possible frequencies (see, e.g., the lower panel of Fig. 16). However, it is important to determine which of these phase values are reliable before they are used in subsequent scientific interpretations. For the purposes of the example shown here, we selected that frequency at which both times series LC1 and LC2 demonstrated Fourier power exceeding the 95% confidence levels in both of their corresponding FFTs. However, a common alternative is to calculate the coherence level for each constituent frequency, which can then be employed (independently of the confidence levels) to pinpoint reliable frequencies in the corresponding cross-power spectrum. The coherence level is estimated from the normalized square of the amplitude of the complex cross-spectrum (see, e.g., Storch and Zwiers 1999), providing a measure, ranging between ‘0’ and ‘1’, of the linear correlation between the two input time series. Under this regime, values of ‘0’ and ‘1’ indicate no and perfect correlation, respectively. For the purposes of solar physics research, it is common to adopt a coherence value \(>0.80\) to signify robust and reliable phase measurements (McAteer et al. 2003; Bloomfield et al. 2004a, b; Stangalini et al. 2013b, 2018; Kontogiannis et al. 2016).
Therefore, the cross-power spectrum and coherence are both used to examine the relationship between two time series as a function of frequency. The cross spectrum identifies common large power (i.e., significant peaks) at the same frequencies in the power spectra of the two time series, and whether such frequencies are related to each other (the relationship is quantified by phase differences). Such correlations cannot, however, be revealed if one or both time series do not have significant power enhancements at particular frequencies, e.g., if the power spectra at those frequencies are indistinguishable from red noise. Nonetheless, there may still be coherent modes at such frequencies, which can be identified in the coherence spectrum, i.e., two time series can have a large coherence at a frequency even though both or one of the power spectra do not show large power at that frequency. Thus, the coherence is a measure of the degree of linear correlation between the two time series at each frequency. In solar physics, the coherence is particularly useful when the two signals are associated with, e.g., different solar atmospheric heights (with, e.g., different amplitudes) and/or two different physical parameters. An example, from real observations, where oscillatory power (at specific time-frequency locations) appears only in one of the signals is demonstrated in Fig. 25. There, no significant power is detected in the cross-power spectrum, whereas a large coherence level, exceeding 0.8, is identified. The significance of phase measurements for reliable coherence values can be evaluated by either introducing a coherence floor level (e.g., the 0.8 threshold mentioned above) or estimating confidence levels. To approximate a floor level, Bloomfield et al. (2004a) randomized both time series for a very large number of realizations and calculated the coherence for each, from which the threshold was estimated as an average over all realizations plus some multiple of the standard deviation of the coherence values. For the confidence levels, the coherence values should be tested against the null hypothesis of zero population coherence, i.e., whether the coherence exceeds expected values from arbitrary colored (e.g., white or red) noise backgrounds. While various methods have been employed for this statistical test, one common approach is to estimate the confidence levels by means of Monte Carlo simulations (Torrence and Compo 1998; Grinsted et al. 2004; Björg Ólafsdóttir et al. 2016).
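For completeness, scipy offers a magnitude-squared coherence estimator that can be used to flag frequencies with robust phase estimates; the synthetic lightcurves and the 0.8 floor below are illustrative choices following the literature cited above:

```python
# Sketch of a coherence estimate between two synthetic 5.6 mHz lightcurves
# mimicking Fig. 14; the 0.8 threshold is an adopted floor, not a constant.
import numpy as np
from scipy.signal import coherence

dt = 1.78                                     # cadence (s)
t = np.arange(0, 3600, dt)
rng = np.random.default_rng(7)
lc1 = np.sin(2 * np.pi * 5.6e-3 * t) + rng.normal(0, 0.5, t.size)
lc2 = np.sin(2 * np.pi * 5.6e-3 * t - np.pi / 4) + rng.normal(0, 0.5, t.size)

freq, coh = coherence(lc1, lc2, fs=1.0 / dt, nperseg=512)
reliable = freq[coh > 0.8]                    # frequencies with robust phases
```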
With reliable phase angles calculated, it then becomes possible to estimate a variety of key wave characteristics. If T is the period of the wave, then the phase lag, \(\phi \) (in degrees), can be converted into a physical time delay through the relationship,
\[
\text{time delay} = T\left( \frac{\phi }{360^{\circ }}\right) . \qquad (8)
\]
The time delay value (arising from the measured phase lag) corresponds to a wave propagating between the different atmospheric layers. Of course, phase angles deduced from the co-spectrum and quadrature spectrum (see Eq. (7)) inherently have phase wrapping at \(\pm 180^{\circ }\), hence introducing a \(360^{\circ }\) ambiguity associated with the precise phase angle (as discussed in Centeno et al. 2006b; Beck et al. 2008; Cally 2009). As a result, the true time delay may need to include multiples of the period to account for the \(360^{\circ }\) ambiguity, transforming Eq. (8) into,
\[
\text{time delay} = T\left( \frac{\phi + (360^{\circ } \times n)}{360^{\circ }}\right) , \qquad (9)
\]
where n is an integer. Many studies to date have examined the propagation of relatively long-period oscillations (e.g., 100–300 s), which permit the assumption of \(n=0\) without violating theoretical considerations (e.g., sound speed restrictions; Jess et al. 2012c), hence allowing direct use of Eq. (8). However, as future studies examine higher-frequency (shorter-period) wave propagation, more careful treatment of Fourier phase wrapping will be needed to ensure the derived time delay is consistent with the observations. As part of a phase ‘unwrapping’ process, the identification of quasi-periodic waves and/or those with modulated amplitudes will allow phase ambiguities to be practically alleviated. For example, by tracking the commencement of a wave, and hence the time delay as it propagates between closely-spaced atmospheric layers, the phase angle can be computed without the \(\pm 360^{\circ }\) phase wrapping uncertainty. Alternatively, a modulated waveform will provide secondary peaks associated with the propagating group, which supplies additional information to better establish the precise value of n in Eq. (9). Such phase unwrapping of the data will enable much more precise tracking of wave energy flux through the solar atmosphere.
Finally, if the geometric height separation, d (in km), between the two layers is known or can be estimated (González Manrique et al. 2020), then the average phase velocity, \(v_{\text {ph}}\), of the wave propagating between these two distinct layers can be deduced via,
\[
v_{\text {ph}} = \frac{d}{\text{time delay}} . \qquad (10)
\]
Similar estimations of the phase velocities of embedded waves have been made by Mein (1977), Athay and White (1979), White and Athay (1979), Centeno et al. (2006b), Bello González et al. (2010a), Jess et al. (2012c), Grant et al. (2015) and Jafarzadeh et al. (2017c), to name but a few examples. Importantly, Eq. (10) can also be rearranged to estimate the atmospheric height separation between two sets of observations. For example, the acoustic sound speed is approximately constant in the lower photosphere, hence this value, alongside the derived time lag, can be utilized to provide an estimate of the height separation, d (e.g., Deubner and Fleck 1989).
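As a quick worked example of Eqs. (8) and (10), assuming an illustrative 210 s wave, a measured \(+45^{\circ }\) phase lag, and an adopted height separation of 500 km:

```python
# Worked numbers for Eqs. (8) and (10); all values are illustrative.
T = 210.0                   # wave period (s)
phi = 45.0                  # measured phase lag (degrees)
d = 500.0                   # assumed geometric height separation (km)

delay = T * (phi / 360.0)   # Eq. (8): 26.25 s
v_ph = d / delay            # Eq. (10): ~19 km/s
print(delay, v_ph)
```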
2.3 Three-dimensional Fourier analysis
Telescope facilities deployed in a space-borne environment, which benefit from a lack of day/night cycles and atmospheric aberrations, have long been able to harness three-dimensional Fourier analyses to examine the temporal (\(t \leftrightarrow \omega \)) and spatial (\([x,y] \leftrightarrow [k_{x},k_{y}]\)) domains. Here, t and \(\omega \) represent the coupled time and frequency domains, respectively, while the [x, y] and \([k_{x},k_{y}]\) terms represent the coupled spatial distances and spatial wavenumbers in orthogonal spatial directions, respectively. Such three-dimensional Fourier analysis has been closely coupled with the field of helioseismology, which is employed to study the composition and structure of the solar interior by examining large-scale wave patterns on the solar surface (Kosovichev et al. 1997; Turck-Chièze 2001; Braun and Lindsey 2000; Gizon et al. 2010; Kosovichev 2011; Buldgen et al. 2019), patterns that often appear as ‘rings’ and ‘trumpets’ when viewed in Fourier space (Hill 1988).
Until recently, it has been challenging to apply the same three-dimensional Fourier techniques to high-resolution datasets from ground- and space-based observatories (Leighton 1963; Spruit et al. 1990), although such techniques were applied to early ground-based observations to study convective phenomena (Chou et al. 1991; Straus et al. 1992) and plage (Title et al. 1992). With the advent of high image pointing stability, brought to fruition through a combination of high-order AO, photometrically accurate image reconstruction algorithms, precise telescope control hardware, and sub-pixel cross-correlation image co-alignment software, it is now possible to achieve long-duration image and/or spectral sequences that are stable enough to allow Fourier analyses in both temporal and spatial domains. The benefit of using high-resolution facilities is that they offer unprecedented Nyquist temporal frequencies (\(\omega \)) and spatial wavenumbers (\([k_{x},k_{y}]\)) due to their high temporal and spatial sampling, respectively. For example, the HARDcam H\(\alpha \) dataset described in Sect. 2.1.1 has a temporal cadence of 1.78 s and a spatial sampling of \(0{\,}.{\!\!}{''}138\) per pixel, providing a Nyquist frequency of \(\omega _{{\text {Ny}}}\approx 280\) mHz \(\left( \frac{1}{2\times 1.78}\right) \) and a Nyquist wavenumber of \(k_{{\text {Ny}}}\approx 22.8\) arcsec\(^{-1}\) \(\left( \frac{2\pi }{2\times 0.138}\right) \). This allows for the examination of the smallest and most rapidly varying phenomena currently visible in such high-resolution datasets.
Applying an FFT to a three-dimensional dataset converts the spatial/temporal signals, [x, y, t], into its frequency counterparts, [\(k_{x}, k_{y}, \omega \)]. An example of this process can be seen in Fig. 17, whereby an FFT has been applied to the HARDcam H\(\alpha \) dataset documented by Grant et al. (2018). It can be seen in the right panel of Fig. 17 that the Fourier power signatures are approximately symmetric in the \(k_{x}/k_{y}\) plane. As a result, it is common for [\(k_{x}, k_{y}\)] cross-cuts at each frequency, \(\omega \), to be azimuthally averaged providing a more straightforward two-dimensional representation of the Fourier power in the form of a \(k{-}\omega \) diagram (Duvall et al. 1988; Krijger et al. 2001; Rutten and Krijger 2003; Kneer and Bello González 2011; Jess et al. 2012c, 2017).
An example application of an FFT to a three-dimensional datacube, converting [x, y, t] (left) into its frequency counterparts [\(k_{x}, k_{y}, \omega \)] (right). The HARDcam H\(\alpha \) dataset presented here is taken from the work of Grant et al. (2018)
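A minimal sketch of constructing an azimuthally averaged \(k{-}\omega \) diagram from an [nt, ny, nx] image cube is given below; the sampling values mirror the HARDcam dataset described above, while the radial binning scheme is an arbitrary choice:

```python
# Sketch of building an azimuthally averaged k-omega diagram from an image
# cube `cube` of shape [nt, ny, nx]; dt and dx follow the HARDcam sampling.
import numpy as np

def k_omega_diagram(cube, dt=1.78, dx=0.138):
    nt, ny, nx = cube.shape
    power = np.abs(np.fft.fftn(cube)) ** 2
    # Keep positive temporal frequencies; centre k = 0 in the spatial plane
    power = np.fft.fftshift(power[: nt // 2], axes=(1, 2))
    omega = np.fft.fftfreq(nt, d=dt)[: nt // 2]                  # Hz
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=dx))   # arcsec^-1
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
    kmag = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    # Azimuthally average each frequency slice into radial wavenumber bins
    edges = np.linspace(0.0, kmag.max(), 101)                    # 100 bins
    which = np.digitize(kmag.ravel(), edges[1:-1])               # 0..99
    counts = np.maximum(np.bincount(which, minlength=100), 1)
    kw = np.array([np.bincount(which, weights=p.ravel(), minlength=100)
                   / counts for p in power])
    return 0.5 * (edges[:-1] + edges[1:]), omega, kw             # [k], [w], [w, k]
```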
An azimuthally averaged \(k{-}\omega \) diagram for the HARDcam H\(\alpha \) sunspot observations described in Sect. 2.1.1 is shown in the right panel of Fig. 18. A number of important features are present in this diagram, including consistency with many quiet-Sun and internetwork Fourier power peaks documented by Krijger et al. (2001), Kneer and Bello González (2011) and Jess et al. (2012c), whereby high power observed at larger spatial wavenumbers tends to be correlated with higher temporal frequencies. This can be visualized in the right panel of Fig. 18, whereby the dominant Fourier power is associated with the smallest spatial wavenumbers and temporal frequencies. However, as the wavenumber is increased to \(>1\) arcsec\(^{-1}\), the temporal frequencies corresponding to maximal Fourier power are concentrated within the \(3{-}6\) mHz interval. This is consistent with the general trends observed in classical photospheric \(k{-}\omega \) diagrams, such as that shown in Fig. 19. Here, two \(k{-}\omega \) diagrams from the photospheric SDO/AIA 1700 Å time series that is co-spatial (and overlaps temporally) with the HARDcam H\(\alpha \) observations (used to produce Fig. 18) are displayed. The information displayed in both panels of Fig. 19 is identical; however, the left panel is displayed on linear wavenumber (k) and frequency (\(\omega \)) scales, while the right panel is displayed on log–log axes. In both panels, similar trends (e.g., heightened Fourier power in the 3–6 mHz interval being linked to larger spatial wavenumbers) can be identified, which is consistent with the overall trends depicted in the right panel of Fig. 18. However, as discussed in Jess et al. (2017), within the region highlighted by the solid black box in the right panel of Fig. 18, there is evidence of elevated Fourier power that spans a large range of spatial scales, yet remains confined to a temporal frequency on the order of 5.9 mHz (\(\approx 170\) s). This suggests that the embedded wave motion has strong coherency across a broad spectrum of spatial scales, yet can be represented by a very narrow range of temporal frequencies. Looking closely at the right panel of Fig. 18, it can be seen that elevated levels of Fourier power extend down to the smallest spatial wavenumbers allowable from the HARDcam dataset. This implies that the 5.9 mHz frequency is still significant on spatial scales much larger than the field of view captured by HARDcam.

Image reproduced with permission from Jess et al. (2017), copyright by the authors
A two-dimensional \([k_{x},k_{y}]\) cross-cut for a single temporal frequency, \(\omega \), corresponding to the HARDcam H\(\alpha \) data acquired on 2011 December 10 and described in Sect. 2.1.1 (left panel). Due to the symmetries often found between \(k_{x}\) and \(k_{y}\), it is common to perform azimuthal averaging (e.g., along the solid green contour) to collapse this two-dimensional information into a single dimension, i.e. \([k_{x}, k_{y}] \rightarrow [k]\). This allows the three-dimensional FFT cube (see, e.g., the right panel of Fig. 17) to be simplified into a standardized two-dimensional image, forming a \(k{-}\omega \) diagram (right panel). Here, the \(k{-}\omega \) diagram is cropped between approximately \(1<\omega <10\) mHz and \(0.3<k<10.0\) arcsec\(^{-1}\), and displayed on a log–log scale to assist visual clarity. The colors represent oscillatory power that is displayed on a log-scale, while the vertical dashed and dotted lines correspond to the spatial size of the umbral diameter (\(\approx 13{\,}.{\!\!}{''}50\)) and the radius of the umbra (\(\approx 6{\,}.{\!\!}{''}75\)), respectively. The solid black box indicates a region of excess wave power at \(\approx 5.9\) mHz (\(\approx 170\) s) over the entire spatial extent of the sunspot umbra.
A set of \(k{-}\omega \) diagrams, derived from the photospheric SDO/AIA 1700 Å time series of active region NOAA 11366, which is co-spatial (and overlaps temporally) with the chromospheric HARDcam measurements presented in Fig. 18. Both \(k{-}\omega \) diagrams are identical; however, the left panel is displayed on linear wavenumber (k) and frequency (\(\omega \)) scales, while the right panel is displayed on log–log axes. It is clear from inspection of the two panels that each has its merit when presenting results, with the linear axes giving less visual emphasis to the lower wavenumbers/frequencies, while the log–log axes allow power-law trends in the power spectral densities to be modeled more easily through straight-line fitting
However, there are a number of key points related to Figs. 18 and 19 that are worth discussing. First, Fig. 19 highlights the merits of utilizing either linear or log–log axes depending on the features being examined. For example, the use of a linear scale (left panel of Fig. 19) results in less visual emphasis being placed on the lowest spatial wavenumbers and temporal frequencies. This can help prevent (visual) over-estimations of the trends present in the \(k{-}\omega \) diagram since all of the frequency bins occupy identical sizes within the corresponding figure. However, as spatial and temporal resolutions dramatically improve with next-generation instrumentation, the corresponding spatial/temporal Nyquist frequencies continue to become elevated, often spanning multiple orders of magnitude. If these heightened Nyquist frequencies are plotted on a purely linear scale, then many of the features of interest may become visually lost within the vast interval occupied by the \(k{-}\omega \) diagram. An option available to counter this would be to crop the \(k{-}\omega \) diagram to simply display the spatial wavenumbers and temporal frequencies of interest, although this comes at the price of discarding information that may be important within the remainder of the frequency space. Alternatively, it is possible to use log–log axes for the \(k{-}\omega \) diagram, which can be visualized in the right panels of Figs. 18 and 19. This type of log–log display also benefits the fitting of any power-law trends that may be present within the \(k{-}\omega \) diagram, since they manifest as straight-line slopes that are more readily fitted. Finally, the right panel of Fig. 18 reveals some horizontal banding of power that appears slightly different from the diagonal ‘arms’ of Fourier power visible in Fig. 19. This may be a consequence of the reduced spatial wavenumber and temporal frequency resolutions achievable with large-aperture ground-based observatories, which naturally have a reduced field-of-view size (causing a relatively low spatial wavenumber resolution when compared to large field-of-view observations from, e.g., SDO) and limited time series durations (creating relatively low temporal frequency resolutions when compared to space-borne satellite missions that are unaffected by day/night cycles and/or atmospheric effects). Therefore, it is imperative that the investigative team examines the merits of each type of \(k{-}\omega \) display and selects either linear or log–log axes to best represent the physical processes at work in their dataset.
2.3.1 Three-dimensional Fourier filtering
Taking the one-dimensional Fourier filtering methodology described in Sect. 2.2.4 a step further, it is often useful to filter an input three-dimensional dataset ([x, y, t]) in terms of both its temporal frequencies, \(\omega \), and its spatial wavenumbers, k. While it is common for the frequency to be defined as the reciprocal of the period, i.e., \(\omega = 1/T\), where T is the period of oscillation, the wavenumber is often defined as \(k = 2\pi /\lambda \) (Krijger et al. 2001), where \(\lambda \) is the wavelength of the oscillation in the spatial domain (i.e., [x, y]). Hence, it is often important to bear in mind this additional factor of \(2\pi \) when translating between wavenumber, k, and spatial wavelength, \(\lambda \). Figures 18 and 20 employ this form of frequency/wavenumber notation, meaning that the spatial wavelengths can be computed as \(\lambda = 2\pi /k\), while the period is simply \(T = 1/\omega \) (similar to that shown in Straus et al. 1992; Jess et al. 2012c). However, some research programs, particularly those adopting helioseismology nomenclature, utilize the factor of \(2\pi \) in both the wavenumber and frequency domains (e.g., \(T = 2\pi /\omega \); Mihalas and Toomre 1981). As a result, it is important to select an appropriate scaling to ensure consistency across a piece of work. An example code capable of performing three-dimensional Fourier filtering is the QUEEn’s university Fourier Filtering (QUEEFF; Jess et al. 2017) algorithm, which is based around the original techniques put forward by Tarbell et al. (1988), Title et al. (1989), Rutten and Krijger (2003), Roth et al. (2010) and Krijger et al. (2001), but now adapted into a publicly available Interactive Data Language (idl; Stern 2000) package.
Outputs provided by a commonly available three-dimensional Fourier filtering code (QUEEFF; Jess et al. 2017), showing a frequency-averaged wavenumber spectrum (upper-left), a Gaussian (with \(2<k<10\) arcsec\(^{-1}\)) wavenumber filter that resembles a torus shape when viewed in the \([k_{x}, k_{y}]\) plane (upper-middle), and the resulting transmitted wavenumber spectra once multiplied by the chosen filter (upper-right). The lower panel displays the wavenumber-averaged frequency spectrum (solid black line), where the Fourier power is displayed (using a log-scale) as a function of the temporal frequency, \(\omega \). The dashed blue line highlights a chosen frequency filter, \(20\pm 10\) mHz, with a Gaussian shape to more smoothly reduce Fourier power at the edges of the chosen spectral range to reduce aliasing. The solid red line displays the resulting transmitted frequency spectrum once multiplied by the chosen Gaussian filter. In each panel, dashed black or white lines highlight the \(k_{x}/k_{y}=0\) arcsec\(^{-1}\) or \(\omega =0\) mHz locations
Importantly, the QUEEFF code provides the user with the ability to apply Gaussian smoothing windows to both frequency and wavenumber regions of interest in order to help mitigate against elements of aliasing during subsequent dataset reconstruction. Figure 20 shows an example figure provided by the QUEEFF code, which displays the frequency-averaged wavenumber power (upper-left panel), the chosen wavenumber filter (upper-middle panel) utilizing a Gaussian structure providing a torus-shaped filter spanning 2–10 arcsec\(^{-1}\), alongside the resulting filtered wavenumber spectra (upper-right panel). The lower panel of Fig. 20 displays the spatially-averaged frequency spectrum of the HARDcam H\(\alpha \) dataset, where the Fourier power is displayed as a function of the frequency, \(\omega \), using a solid black line. A Gaussian frequency filter, spanning \(20\pm 10\) mHz, is overplotted using a dashed blue line. The preserved temporal frequencies (i.e., once the original frequency spectrum has been multiplied by the chosen frequency filter) are shown using a solid red line. This filtered three-dimensional Fourier cube can then be passed through an inverse FFT to reconstruct an intensity image cube that contains the wavenumbers and frequencies of interest to the user.
Again, as discussed in Sect. 2.2.4, the QUEEFF three-dimensional Fourier filtering code constructs a Gaussian-shaped filter, which is applied in the Fourier domain. This ensures that the filter is symmetric about the chosen peak frequency (see, e.g., the black line in the left panel of Fig. 21). Of course, due to the oscillation period having a reciprocal relationship with the temporal frequency (i.e., \(1/\omega \)), this results in asymmetric sampling about the desired peak period (see, e.g., the solid black line in the right panel of Fig. 21). Depending upon the science requirements of the user, it may be more advantageous to apply a Gaussian-shaped filter in the period domain (e.g., the solid blue line in the right panel of Fig. 21), which ensures less inclusion of lower frequency (higher period) terms that may be undesirable in the final reconstructed time series. This is highlighted by the more rapid truncation of the filter (solid blue line in the left panel of Fig. 21) towards lower frequencies. Additionally, the user may select alternative frequency filters, such as a Voigt profile (Zaghloul 2007), which is shown in Fig. 21 using a solid red line. Furthermore, Fig. 21 shows possible filtering combinations that can be applied to the temporal domain, yet similar options are available when filtering the spatial wavenumbers (\([k_{x}, k_{y}]\)) too. Ultimately, it is the science objectives that drive forward the wave filtering protocols, so possible options need to be carefully considered before applying to the input data.
Different types of frequency (\(\omega \)) filter that can be applied to time-resolved data products. The left panel displays the filter transmission (as a percentage) in terms of the frequency, while the right panel displays the same filters as a function of the oscillatory period. Presented using a solid black line is a Gaussian-shaped filter in the frequency domain with a FWHM equal to 10 mHz, while the solid red line indicates a Voigt-shaped filter in the frequency domain, both centered on 20 mHz. Contrarily, a Gaussian-shaped filter in the period domain, with a FWHM equal to 10 s, is shown using a solid blue line, again centered on 50 s to remain consistent with the 20 mHz profiles shown using red and black lines. It is clearly evident that the filter profile shape changes dramatically between the time and frequency domains, and hence it is important to select the correct filter based upon the science requirements
Combination Fourier filters (i.e., that are functions of \(k_{x}\), \(k_{y}\) and \(\omega \)) have been utilized in previous studies to extract unique types of wave modes manifesting in the lower solar atmosphere. For example, specific Fourier filters may be employed to extract signatures of f- and p-mode oscillations manifesting in photospheric observations (e.g., Hill 1988; Schou et al. 1998; Gizon and Birch 2004; Bahauddin and Rast 2021). Another example of a well-used Fourier filter is the ‘sub-sonic filter’, which can be visualized as a cone in \(k{-}\omega \) space (Title et al. 1989),
\[
v_{\text{ph}} = \frac{\omega }{k} , \qquad (11)
\]
where \(v_{\text{ph}}\) is the phase velocity of the wave. Here, all Fourier components inside the cone, where propagation velocities are less than the typical sound speed (i.e., \(v_{\text{ph}} < c_{s}\)), are retained while velocities outside the cone are set to zero. An inverse Fourier transform of this filtered spectrum provides a dataset that is embodied by the convective part of the solar signal since the non-convective phenomena (e.g., solar p-modes) have been removed (Straus and Bonaccini 1997; Rutten and Krijger 2003). Alternatively, modification of the sub-sonic filter to include only those frequencies above the Lamb mode, \(\omega = c_{s}k\) (Fleck et al. 2021), provides a reconstructed dataset containing oscillatory parts of the input signal. As highlighted above, it is the science objectives that define the filtering sequences required to extract the underlying time series of interest. However, well-proven examples of these exist for common phenomena (e.g., solar f- and p-modes), hence providing an excellent starting point for the community.
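A minimal sketch of such a sub-sonic cone filter is given below, assuming an image cube with spatial sampling expressed in km and an adopted photospheric sound speed of \(\sim 7\) km s\(^{-1}\); all names and values are illustrative:

```python
# Sketch of a sub-sonic cone filter in k-omega space (Eq. 11), assuming
# `cube` has shape [nt, ny, nx] and dx_km is the pixel scale in km.
import numpy as np

def subsonic_filter(cube, dt, dx_km, c_s=7.0):
    nt, ny, nx = cube.shape
    ft = np.fft.fftn(cube)
    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)                 # rad/s
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx_km)                 # rad/km
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx_km)
    kmag = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))
    # Retain components inside the cone v_ph = omega/k < c_s; zero the rest
    keep = np.abs(omega)[:, None, None] < c_s * kmag[None, :, :]
    keep[0, 0, 0] = True                 # preserve the mean intensity level
    return np.real(np.fft.ifftn(ft * keep))
```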
2.4 Wavelet analyses
While FFT analysis is very useful for identifying and characterizing persistent wave motion present in observational datasets, it begins to encounter difficulties when the time series contains weak signals and/or quasi-periodic signatures. Figure 22 shows example time series containing a persistent wave signal with a 180 s periodicity (5.56 mHz) and no embedded noise (top-left panel), a quasi-periodic 5.56 mHz wave signal with no noise (middle-left panel), and a quasi-periodic 5.56 mHz wave signal embedded in high-amplitude noise (lower-left panel). It can be seen for each of the corresponding right-hand panels, which reveal the respective Fourier power spectral densities, that the detected 5.56 mHz Fourier peak becomes progressively less apparent and swamped by noise, even becoming significantly broadened in the lower-right panel of Fig. 22. As a result, the application of Fourier analysis to solar time series that often display quasi-periodic wave motion (e.g., spicules, fibrils, rapid blueshift excursions (RBEs), etc.; Beckers 1968; De Pontieu et al. 2004, 2007a, b; Zaqarashvili and Erdélyi 2009; Sekse et al. 2013a, b; Kuridze et al. 2015) may not be the most appropriate approach, as a result of the limited lifetimes associated with these features.
An example time series consisting of a pure 180 s periodicity (5.56 mHz) signal, which is sampled at a cadence of 1.44 s to remain consistent with modern instrument capabilities (upper left). The middle-left panel shows the same example time series, only now with the first three and last two complete wave cycles suppressed, hence making a quasi-periodic wave signal. The lower-left panel shows the same quasi-periodic wave signal shown in the middle-left panel (solid green line), only now with superimposed Poisson (shot) noise added on top of the signal. Each of the right panels display the corresponding FFT-generated Fourier spectra, with the frequency and Fourier power values plotted on log-scales for better visual clarity. The vertical dashed red lines highlight the input 5.56 mHz signal
Wavelet techniques, pioneered by Torrence and Compo (1998), employ a time-localized oscillatory function that is continuous in both time and frequency (Bloomfield et al. 2004b), which allows them to be applied in the search for dynamic transient oscillations. The time resolution of the input dataset is preserved through the modulation of a simple sinusoid (synonymous with standard FFT approaches) with a Gaussian envelope, providing the Morlet wavelet commonly used in studies of waves in the solar atmosphere (Bloomfield et al. 2004a; Jess et al. 2007; Stangalini et al. 2012; Kobanov et al. 2013, 2015; Jafarzadeh et al. 2017d). As a result, a wavelet transform is able to provide high frequency resolution at low frequencies and high time resolution at high frequencies, which is summarized by Kehtarnavaz (2008).
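The wavelet figures in this review follow the Torrence and Compo (1998) formalism, for which those authors provide widely used reference codes. As a rough stand-in, the PyWavelets package offers a continuous wavelet transform that reproduces the essential behavior; the sketch below applies a Morlet transform to the 180 s test signal of Fig. 22, with the scale range an illustrative choice:

```python
import numpy as np
import pywt

dt = 1.44                                   # cadence (s), as in Fig. 22
t = np.arange(0, 3600, dt)
signal = np.sin(2 * np.pi * t / 180.0)      # stand-in 180 s (5.56 mHz) signal

# Scales chosen so the transform spans periods of ~10-1000 s
periods = np.logspace(1, 3, 100)            # target periods (s)
C = pywt.scale2frequency("morl", 1.0)       # Morlet central frequency (~0.81)
scales = C * periods / dt

coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=dt)
power = np.abs(coeffs) ** 2                 # wavelet power, shape (scale, time)
```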
Figure 23 displays the wavelet power spectrum (lower panel) resulting from the application of a Morlet wavelet transform on the detrended and apodized HARDcam H\(\alpha \) lightcurve (upper panel). Here, it is possible to see the effects of quasi-periodic wave phenomena, where there is clear evidence of a large-amplitude periodicity between times of \(0{-}2200\) s at a period of \(\approx 210\) s (\(\approx 4.7\) mHz). This wave activity is highlighted in the wavelet transform by being bounded by the 95% confidence level isocontours across these times and periods, which is equivalent to the oscillatory behavior being significant at the 5% level (Torrence and Compo 1998). To calculate the wavelet power thresholds corresponding to the 95% confidence isocontours, the wavelet background spectrum (i.e., the output theoretical background spectrum that has been smoothed by the wavelet function) is multiplied by the 95\(^{\text {th}}\) percentile value for a \(\chi _{2}^{2}\) distribution (Gilman et al. 1963). Please note that, for a number of reasons, including expensive computation times, Monte Carlo randomization methods are generally not preferred for wavelet transforms (Lau and Weng 1995; Torrence and Compo 1998). The \(\approx 210\) s wavelet power signatures shown in the lower panel of Fig. 23 are consistent with the standardized FFT approach documented in Sect. 2.2, although the quasi-periodic nature of the wave motion is likely a reason why the corresponding power in the traditional FFT spectrum (upper panel of Fig. 8) is not as apparent. Importantly, with the wavelet transform it is possible to identify more clearly the times when this periodicity appears in and disappears from the time series, which is seen to correlate visibly with the clear sinusoidal fluctuations present at the start of the H\(\alpha \) time series (upper panel of Fig. 23). Also, the lack of significant wavelet power at very long periods (low frequencies) suggests that the lightcurve detrending applied is working adequately.
The detrended and apodized HARDcam H\(\alpha \) lightcurve shown in the lower panel of Fig. 6 (top). The bottom panel shows the corresponding wavelet transform, where the wave power is displayed as a function of the oscillatory period (y-axis) and observing time (x-axis). The color bar displays the normalized wavelet power, while the cross-hatched region (bounded by a dashed white line) highlights locations of the wavelet transform that may be untrustworthy due to edge effects. Solid white lines contour regions where the wavelet power exceeds the 95% confidence level (i.e., significant at the 5% level)
Due to wavelet analyses preserving the time domain of the original input signal, care must be taken to ensure that any power visible in wavelet transforms is the result of wave motion and not an instantaneous spike in intensity. To achieve this, it is typical to exclude oscillations from subsequent analysis that last, in duration, for less than \(\sqrt{2}\) wave cycles. This requirement is often referred to as the decorrelation time (Torrence and Compo 1998), which involves comparing the width of a peak in the wavelet power spectrum (defined as the time interval over which the wavelet power exceeds the 95% confidence level—see Sect. 2.2.2) with the period itself to determine the number of complete wave cycles (Ireland et al. 1999; McAteer et al. 2004). Oscillations that last for less time than \(\sqrt{2}\) wave cycles are subsequently discarded as they may be the result of spikes and/or instrumental abnormalities in the data. In addition, periodicities manifesting towards the extreme edges of the lightcurve need to be considered carefully due to the possible presence of edge effects arising from the finite duration of the time series (Meyers et al. 1993). This region where caution is required is highlighted in the lower panel of Fig. 23 using the cross-hatched solid white lines. Here, the “cone of influence” (COI) is defined as the e-folding time for the autocorrelation of wavelet power at each scale, and for the traditional Morlet wavelet this is equal to \(\sqrt{2}\) wave cycles (Torrence and Compo 1998), hence why longer periods are more heavily affected (in the time domain) than their shorter (high-frequency) counterparts.
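In practice, the decorrelation-time criterion reduces to a simple comparison between the duration of a significant-power interval and the local period; a minimal sketch (with an illustrative helper name) might read:

```python
import numpy as np

def passes_decorrelation_test(t_start, t_end, period):
    """True if a significant-power interval spans >= sqrt(2) wave cycles.

    t_start, t_end : bounds (s) of the interval where wavelet power
                     exceeds the 95% confidence level
    period         : oscillation period (s) at that wavelet scale
    """
    return (t_end - t_start) / period >= np.sqrt(2.0)

# e.g., the ~210 s periodicity significant between 0 and 2200 s in Fig. 23
print(passes_decorrelation_test(0.0, 2200.0, 210.0))   # True (~10.5 cycles)
```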
Finally, many research studies employ the global wavelet spectrum to characterize the frequencies present in the input time series. Here, the global wavelet spectrum is defined as the average spectrum across all local wavelet spectra along the entire input time axis (Torrence and Compo 1998). Essentially, the global wavelet spectrum can be considered as an estimation of the true Fourier spectrum. For example, a time series comprised of mixed wave frequencies that are superimposed on top of a white noise background should produce Fourier spectral peaks equal to \(2\sigma _{\epsilon }^{2} + NA_{i}^{2}/2\), where \(A_{i}\) are the amplitudes of the oscillatory components, \(\sigma _{\epsilon }^{2}\) is the variance of the noise, and N is the number of steps in the time series (Priestley 1981). However, the corresponding peaks in the global wavelet spectrum will usually be higher at larger scales when compared to smaller scales, which is a consequence of the wavelet transform having better frequency resolution at long periods, albeit with worse time localization (Marković and Koch 2005).
As such, the global wavelet spectrum is often considered a biased estimation of the true Fourier spectrum (Wu and Liu 2005). This effect can be clearly seen in Fig. 24, which displays both the Fourier and global wavelet power spectra for the same HARDcam H\(\alpha \) time series shown in the lower panel of Fig. 6. In Fig. 24, the higher power at larger scales (lower frequencies) is visible in the global wavelet spectrum (red line), when compared to that derived through traditional Fourier techniques (black line). However, at smaller scales (higher frequencies), both the global wavelet and Fourier spectra are in close agreement with one another, with the global wavelet spectrum appearing as a smoothed Fourier spectrum. These effects arise from the width of the wavelet filter in Fourier space. At large scales (low frequencies), the wavelet is narrower in frequency, resulting in sharper peaks that have inherently larger amplitudes. Contrarily, at small scales (high frequencies), the wavelet is broader in frequency, hence causing any peaks in the spectrum to become smoothed (Torrence and Compo 1998). As such, it is important to take such biases into consideration when interpreting any embedded wave motion. Indeed, Banerjee et al. (2001), Christopoulou et al. (2003), Samanta et al. (2016), Kayshap et al. (2018) and Chae et al. (2019) have discussed the implementation of global wavelet and Fourier power spectra in the context of solar oscillations.
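Continuing from the wavelet sketch above, the global wavelet spectrum is simply the time-average of the local wavelet power at each scale, which can then be plotted alongside the Fourier power spectrum (cf. Fig. 24):

```python
import numpy as np

# `power` is the (scale, time) wavelet power array from the earlier sketch
global_ws = power.mean(axis=1)                  # global wavelet spectrum

# Fourier power of the same signal, for direct comparison
fft_power = np.abs(np.fft.rfft(signal)) ** 2
fft_freq = np.fft.rfftfreq(signal.size, d=dt)   # Hz
```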
Fourier (black line) and global wavelet (red line) power spectra of the HARDcam H\(\alpha \) detrended lightcurve shown in the lower panel of Fig. 6. It can be seen that at larger scales (lower frequencies) the global wavelet spectrum has increased power over that calculated from traditional Fourier techniques, due to the increased wavelet frequency resolution in this regime. Contrarily, at smaller scales (higher frequencies) the global wavelet spectrum appears as a smoothed Fourier spectrum due to the reduced frequency resolution at these smaller scales. While the global wavelet spectrum is a good estimation of the Fourier power spectrum, these biases need to be carefully considered when interpreting the embedded wave motion
2.4.1 Wavelet phase measurements
Similar to the Fourier phase lag analysis described in Sect. 2.2.5, it is also useful to obtain phase angles, cross-power spectra, and coherence between wavelet power spectra at different wavelengths, spatial locations, and/or multi-component spectral measurements. Hence, the phase angles are determined not only as a function of frequency, but also as a function of time. These phase angles are usually displayed as small arrows on a wavelet co-spectrum (or wavelet coherence) map, where their directions indicate the phase angles at different time-frequency locations. The convention with which an arrow direction represents, e.g., zero and \(90^{\circ }\) phase angles (and which lightcurve leads or lags behind) should be specified.
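Given two complex wavelet coefficient arrays (e.g., from a complex Morlet transform) on identical scale and time grids, the cross-spectrum, phase, and coherence maps can be sketched as below. Note that genuine wavelet coherence requires scale-dependent smoothing (Grinsted et al. 2004); the fixed boxcar used here is a simplifying assumption, as are the function name and smoothing widths:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wavelet_cross_analysis(W1, W2, smooth=(3, 25)):
    """Cross-power, phase, and coherence from two wavelet transforms.

    W1, W2 : complex wavelet coefficients, shape (scale, time), on
             identical scale/time grids
    smooth : boxcar widths (scale bins, time bins) applied before the
             coherence is formed
    """
    cross = W1 * np.conj(W2)                     # wavelet cross-spectrum
    phase = np.angle(cross, deg=True)            # phase lag per (scale, time)

    # smooth real and imaginary parts separately (ndimage filters are real)
    s_cross = (uniform_filter(cross.real, smooth)
               + 1j * uniform_filter(cross.imag, smooth))
    s_p1 = uniform_filter(np.abs(W1) ** 2, smooth)
    s_p2 = uniform_filter(np.abs(W2) ** 2, smooth)
    coherence = np.abs(s_cross) ** 2 / (s_p1 * s_p2)
    return cross, phase, coherence
```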
Reproduced from Jafarzadeh et al. (2017d), the lower- and upper-left panels of Fig. 25 display two wavelet power spectra (from a Morlet wavelet transform) of transverse oscillations in a small magnetic element (marked with circles in Fig. 5) at two atmospheric heights sampled by the SuFI/Sunrise 300 nm and Ca ii H bands (with an average height difference of \(\approx 450\) km), respectively. Islands of high power, particularly those marked by the 95% confidence level contours, are evident in both wavelet power spectra. The wavelet co-spectrum and coherence maps of these two power spectra are shown in the upper- and lower-right panels of Fig. 25, respectively. The phase-lag arrows are overplotted on the entire cross-power spectrum, while the same arrows are depicted on the latter map only where the coherence exceeds 0.7. Here, the arrows pointing right represent in-phase oscillations, and those pointing straight up identify \(90^{\circ }\) phase lags where the oscillations in the 300 nm band lag behind those observed in the Ca ii H time series. Note here the changes of phase lags from one time-frequency region to another, particularly in regions with confidence levels larger than 95% and/or areas with coherence exceeding 0.7 (or 0.8). However, most of the arrows point upwards (with different angles) in this example, implying an upward wave propagation in the lower solar atmosphere (i.e., from the low photosphere, sampled by the 300 nm band, to the heights corresponding to the temperature minimum/low chromosphere, sampled by the Ca ii H images). A slight downward propagation is also observed in a small area. These may be associated with various wave modes and/or oppositely propagating waves at different frequencies and times. We note that such phase changes with time could not be identified using a Fourier phase lag analysis (see Sect. 2.2.5), where phase angles are computed as a function of frequency only.

Images reproduced with permission from Jafarzadeh et al. (2017d), copyright by AAS
Wavelet power spectra of transverse oscillations in a small magnetic element (marked with circles in Fig. 5), from time-series of images acquired in 300 nm (lower left) and in Ca ii H (upper left) bands from SuFI/Sunrise. The right panels display the wavelet co-spectrum power (on the top) and coherence map (on the bottom). The 95% confidence levels are identified with dashed/solid contours in all panels and the COIs are marked with the cross-hatched/shaded regions. The arrows on the right panels show the phase angles between oscillations at the two atmospheric heights, with in-phase oscillations depicted by arrows pointing right and fluctuations in Ca ii H leading those in 300 nm by \(90^{\circ }\) marked by arrows pointing straight up.
Whether the cross-power spectrum or coherence should be used for the wave identification greatly depends on the science and the types of data employed. While the co-spectrum (which is obtained through multiplying the wavelet power spectrum of one time series by the complex conjugate of the other) identifies regions with large power in common (between the two time series), the coherence (i.e., the square of the cross-spectrum normalized by the individual power spectra; Grinsted et al. 2004) highlights areas where the two time series co-move, but do not necessarily share a high common power. An example is the area around the time and period of 70 and 47 s, respectively, that is associated with a coherence level exceeding 0.8 (and within the 95% confidence levels), but with no significant power in the co-spectrum (only one of the power spectra, i.e., that from the Ca ii H data, shows large power at that time and period location).
As a working example, from the right panels of Fig. 25, the phase lag at the time and period of 75 and 41 s, respectively, reads approximately \(140^{\circ }\), which translates to a time lag of \(\approx 16\) s. Given the average height difference of 450 km between the two atmospheric layers, this results in a wave propagation speed of \(\approx 28\) km/s (due to the transverse oscillations in the small-scale magnetic element). A similar analysis for intensity oscillations in the same small-scale magnetic element has also been presented in Jafarzadeh et al. (2017d). Of course, as highlighted in Sect. 2.2.5, phase measurements are always subject to an associated uncertainty of \(\pm 360^{\circ }\) (\(\pm 2\pi \)), which arises via phase wrapping. As a consequence, to alleviate ambiguities in phase angles, and in the subsequently derived phase velocities, care must be taken to select observational time series where the atmospheric height separation is not too substantial (see Sect. 2.2.5 for more discussion).
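The arithmetic of this working example is compact enough to spell out, including the phase-wrapping alias mentioned above:

```python
phase_deg = 140.0        # measured phase lag (degrees)
period = 41.0            # oscillation period (s)
height_km = 450.0        # height separation of the two channels (km)

time_lag = (phase_deg / 360.0) * period      # ~16 s
v_prop = height_km / time_lag                # ~28 km/s

# Phase wrapping: phase_deg + n*360 fits the data equally well, e.g.,
# n = 1 gives a ~57 s lag and only ~8 km/s, so small height separations
# are needed to exclude such aliases on physical grounds.
```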
Depending on the science objectives, it may be helpful to inspect the variation of phase lags with frequency (or period). To this end, a statistical phase diagram can be created, where all reliable phase angles (e.g., those associated with power significant at the 5% level, and/or with a coherence exceeding 0.8) are plotted as a function of frequency (Jess et al. 2012c). Such a phase diagram can provide information about the overall wave propagation in, e.g., similar magnetic structures. Figure 26 illustrates a phase diagram (i.e., a 2D histogram of phase angle versus period; from Jafarzadeh et al. 2017d) constructed from all the reliable phase angles obtained from the transverse oscillations in 7 small magnetic elements, similar to that discussed above. The background colors represent the occurrence frequency and the contours mark regions which are statistically significant (i.e., compared to the extreme outliers). From this phase diagram, it is evident that upward propagating waves (i.e., those with positive phase angles in the convention introduced here) appear to be preferential.

Image reproduced with permission from Jafarzadeh et al. (2017d), copyright by AAS
Phase diagram of transverse oscillations in 7 small magnetic elements observed in two layers of the lower solar atmosphere (with \(\approx 450\) km height difference) from SuFI/Sunrise.
2.5 Empirical mode decomposition
Empirical Mode Decomposition (EMD; Huang et al. 1998, 1999) is a statistical tool developed to decompose an input time series into a set of intrinsic timescales. Importantly, EMD is a contrasting approach to traditional FFT/wavelet analyses since it relies on an empirical approach rather than strict theoretical tools to decompose the input data. Due to the decomposition being based on the local characteristic timescales of the data, it may be applied to non-linear and non-stationary processes without the detrending often applied before the application of Fourier-based techniques (i.e., under the assumption that such detrending is able to accurately characterize any non-stationary and/or non-periodic fluctuations in the time series with a low-order polynomial). As such, it is possible for EMD to overcome some of the limitations of FFT/wavelet analyses, including aspects of wave energy leakage across multiple harmonic frequencies (Terradas et al. 2004).
Following the methodology described by Terradas et al. (2004), we apply EMD techniques to the HARDcam H\(\alpha \) time series depicted in the upper-left panel of Fig. 6. To begin, extrema in the lightcurve are identified, and are then connected by a cubic spline fit to provide an upper envelope of the positive intensity fluctuations (i.e., fluctuations above the mean). Next, the same process is applied to find the lower envelope corresponding to negative intensity fluctuations (i.e., fluctuations below the mean). The mean value between the upper and lower envelopes, at each time step, is denoted \(m_{1}(t)\). The difference between the original input data and the mean function is called the first component, \(h_{1}(t)\). Provided the input time series contains no undershoots, overshoots, and/or riding waves (Huang et al. 1998), the first intrinsic mode function (IMF) is equal to \(h_{1}(t)\).
Unfortunately, many input time series contain signal blemishes, and removal of the first component, \(h_{1}(t)\), from the original lightcurve will generate additional (false) extrema. Hence, to mitigate against these potential issues, the above procedure is repeated numerous times until the first true IMF is constructed (see Huang et al. 1998, for more information). The first IMF constructed, \(c_{1}(t)\), is comprised of the most rapid fluctuations of the signal. This can then be subtracted from the original time series, producing a residual lightcurve made up of longer duration fluctuations. The process can subsequently be repeated numerous times to extract additional IMFs until the amplitude of the residual lightcurve falls below a predetermined value, or becomes a function from which no more IMFs can be extracted (Terradas et al. 2004).
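The sifting procedure described above can be sketched in a few dozen lines of Python. The snippet below is a deliberately simplified illustration: the boundary handling of the spline envelopes and the proper convergence criterion of Huang et al. (1998) are omitted, and the fixed number of sift iterations is an assumption made for brevity:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(signal, t):
    """One sifting iteration: subtract the mean envelope m1(t) to get h1(t).
    Assumes enough extrema exist for the spline fits; boundary handling
    is omitted for brevity."""
    imax = argrelextrema(signal, np.greater)[0]      # local maxima
    imin = argrelextrema(signal, np.less)[0]         # local minima
    upper = CubicSpline(t[imax], signal[imax])(t)    # upper envelope
    lower = CubicSpline(t[imin], signal[imin])(t)    # lower envelope
    m1 = 0.5 * (upper + lower)                       # mean envelope
    return signal - m1                               # candidate component

def extract_imf(signal, t, n_sifts=10):
    """Approximate the first IMF with a fixed number of sifts (production
    codes iterate until a convergence criterion is met; Huang et al. 1998)."""
    h = signal.copy()
    for _ in range(n_sifts):
        h = sift_once(h, t)
    return h

# c1 = extract_imf(lightcurve, t); then repeat on (lightcurve - c1) for c2, ...
```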
Figure 27 shows a collection of IMFs extracted from the HARDcam H\(\alpha \) time series depicted in the upper-left panel of Fig. 6. It is clear that the most rapid fluctuations are present in IMF \(c_{1}\), with IMF \(c_{8}\) documenting the slowest evolving intensity variations. Plotted on top of IMF \(c_{8}\) is the original H\(\alpha \) time series, along with the polynomial best-fit line (dashed red line) used to detrend the lightcurve in Sect. 2.2 before the application of FFT/wavelet techniques. The global trends highlighted by IMF \(c_{8}\) and the polynomial best-fit line are similar, again highlighting the appropriate use of detrending in Sect. 2.2, but now compared with generalized empirical methods. Figure 28 displays the 8 extracted IMFs in the form of a two-dimensional map, which can often be used to more readily display the corresponding interplay between the various amplitudes and variability timescales.
IMFs \(c_{1} \rightarrow c_{8}\), extracted from the original (non-detrended and non-apodized) HARDcam H\(\alpha \) time series overplotted in the lower-right panel. In addition, the lower-right panel also shows the polynomial best-fit line (dashed red line) used to detrend the data prior to FFT/wavelet analyses. It can be seen that the longest period fluctuations making up IMF \(c_{8}\) are similar to the global trend line calculated in Sect. 2.2. Note that a summation of IMFs \(c_{1} \rightarrow c_{8}\) will return the original signal
IMFs \(c_{1} \rightarrow c_{8}\) displayed as a two-dimensional map (left), where yellow and blue colors represent the peaks and troughs, respectively, of the IMF intensity fluctuations. The horizontal dashed black lines represent cuts through each IMF, with the corresponding intensity time series displayed in the right panel. The horizontal dashed red lines represent the zero value corresponding to each IMF
Once the IMFs have been extracted from the input time series, it is possible to employ Hilbert spectral analysis (Huang et al. 1998; Oppenheim and Schafer 2009) to examine the instantaneous frequencies with time for each IMF. The combined application of EMD and Hilbert spectral analysis is often referred to as the Hilbert–Huang transformation (Huang and Wu 2008). From the outputs of the Hilbert–Huang transformation, it is possible to display the instantaneous frequencies for each of the extracted IMFs as a function of time. The left panel of Fig. 29 displays the instantaneous frequencies corresponding to IMFs \(c_{2} \rightarrow c_{7}\) using the purple, blue, dark green, light green, orange, and red lines, respectively. IMFs \(c_{1}\) and \(c_{8}\) have been removed from the plot as these correspond to very high and low frequency fluctuations, respectively, which clutter the figure if included. The solid colored lines represent the running mean values of the instantaneous frequencies (calculated over a 30 s window), while the vertical colored error bars indicate the standard deviations of the frequency fluctuations found within the running mean sampling timescale. As already shown in Figs. 27 and 28, the frequencies associated with higher IMFs are naturally lower as a result of the residual time series containing less rapid fluctuations. It can be seen in the left panel of Fig. 29 that IMF \(c_{2}\) contains frequencies in the range of 50–300 mHz (3–20 s), while IMF \(c_{7}\) displays lower frequencies spanning 1–30 mHz (33–1000 s). We must note that the left panel of Fig. 29 is simply a representation of the instantaneous frequencies present in the time series as a function of time and does not contain information related to their relative importance (e.g., their respective amplitudes), although this information is indeed present in the overall Hilbert–Huang transform.
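The instantaneous frequency of each IMF follows directly from its analytic signal; a minimal sketch using scipy (the function name is illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, dt):
    """Instantaneous frequency (Hz) of one IMF via its analytic signal."""
    analytic = hilbert(imf)                           # analytic signal
    phase = np.unwrap(np.angle(analytic))             # instantaneous phase (rad)
    return np.gradient(phase, dt) / (2.0 * np.pi)     # dphase/dt -> Hz

# the amplitude envelope, np.abs(hilbert(imf)), weights each frequency's
# contribution when the Hilbert-Huang power spectrum is assembled
```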
Instantaneous frequencies computed from applying a Hilbert–Huang transform to the HARDcam H\(\alpha \) lightcurve shown in the lower-right panel of Fig. 27 and displayed as a function of time (left panel). The solid purple, blue, dark green, light green, orange, and red lines correspond to moving average frequencies (computed over a 30 s window) for the IMFs \({c_{2} \rightarrow c_{7}}\), respectively. The vertical error bars correspond to the standard deviations of frequencies found within the 30 s moving average windows. The right panel displays the corresponding Hilbert–Huang power spectrum, calculated by integrating the instantaneous frequency spectra over time and normalized to the largest power value computed, hence providing a plot of relative changes in spectral energy as a function of frequency. Features within the power spectrum are consistent with the FFT and wavelet outputs shown in Figs. 12 and 23
Finally, it is possible to integrate the instantaneous frequency spectra (including their relative amplitudes) across time, producing the Hilbert–Huang power spectrum shown in the right panel of Fig. 29. The features of the Hilbert–Huang power spectrum are very similar to those depicted in the FFT spectrum shown in the right panel of Fig. 12. Notably, there is a pronounced power enhancement at \(\approx 4.7\) mHz, which is consistent with both the FFT power peaks (right panel of Fig. 12) and the heightened wave amplitudes found at \(\approx 210\) s in the wavelet transform shown in the bottom panel of Fig. 23. This shows the consistency between FFT, wavelet, and EMD approaches, especially when visible wave activity is evident.
2.6 Proper orthogonal decomposition and dynamic mode decomposition
Recently, for the first time, Albidah et al. (2021, 2022) applied the methods of Proper Orthogonal Decomposition (POD; see e.g., Pearson 1901; Lumley 1967) and Dynamic Mode Decomposition (DMD; see e.g., Schmid 2010) to identify MHD wave modes in sunspot umbrae. The POD method defines the eigenfunctions to be orthogonal in space but places no constraints on their temporal behavior. On the other hand, DMD puts no constraints on the spatial structure of the eigenfunctions but defines them to be orthogonal in time, meaning that each DMD mode has a distinct frequency. Hence, POD modes are permitted to have broadband frequency spectra but DMD modes are not. This is shown in the right panel of Fig. 30, which shows a broadband power spectral density (PSD) of 8 POD modes detected in a sunspot umbra by Albidah et al. (2021) using HARDcam H\(\alpha \) intensity observations from Jess et al. (2017).
The left panel shows a sunspot from Jess et al. (2017) in H\(\alpha \) intensity using data from HARDcam (one pixel has a width of 0.138''). The middle panel shows the mean intensity of the time series, where the color bar displays the magnitude of the mean time series, the solid black line shows the umbra/penumbra boundary (intensity threshold level 0.85), and the green box (101 \(\times \) 101 pixels) shows the region where Albidah et al. (2021) applied POD and DMD. The right panel displays the PSD of the time coefficients of the first 20 POD modes (in log scale). The PSD shows peaks between frequencies of 4.3–6.5 mHz (corresponding to periods of 153–232 s). Image adapted from Albidah et al. (2021)
Both POD and DMD produce 2D eigenfunctions, as shown in the left and middle columns of Fig. 31; however, they achieve this using different approaches. Essentially, DMD identifies the spatial modes which best fit a constant sinusoidal behavior in time, as with a Fourier transform. POD ranks the spatial modes in order of their contribution to the total variance, which DMD cannot do.
Since POD can produce as many modes as there are time snapshots, the challenge is to identify which modes are physical and which are not. Similarly, not all DMD modes may be physical. For practical purposes, a physical model, such as the magnetic cylinder model (see Sect. 2.9.2 for discussion of the MHD wave modes of a magnetic cylinder), can be used to select the POD and DMD modes which most closely correspond to predicted MHD wave modes. For the approximately circular sunspot shown in Fig. 31, the predicted MHD cylinder modes which are in the strongest agreement with the selected POD and DMD modes are shown in the right column. These are the fundamental slow body sausage (top row) and kink modes (bottom row).
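Both decompositions reduce to standard linear algebra on the snapshot matrix. The sketch below, written with plain numpy under the assumption of a data cube ordered (time, y, x), is illustrative only and is not the implementation used by Albidah et al. (2021); the rank truncation r is an arbitrary choice:

```python
import numpy as np

def pod(cube):
    """POD of an image sequence (time, y, x): spatial modes ranked by
    contribution to the total variance, plus their time coefficients."""
    nt, ny, nx = cube.shape
    X = cube.reshape(nt, -1).T                      # snapshots as columns
    X = X - X.mean(axis=1, keepdims=True)           # remove the temporal mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U.T.reshape(-1, ny, nx)                 # spatial POD modes
    coeffs = S[:, None] * Vt                        # time coefficients
    return modes, coeffs

def dmd(cube, dt, r=20):
    """Rank-r exact DMD (Schmid 2010): one frequency per mode."""
    nt, ny, nx = cube.shape
    X = cube.reshape(nt, -1).T
    X1, X2 = X[:, :-1], X[:, 1:]                    # time-shifted pairs
    U, S, Vt = np.linalg.svd(X1, full_matrices=False)
    U, S, Vt = U[:, :r], S[:r], Vt[:r]
    A_tilde = (U.conj().T @ X2 @ Vt.conj().T) / S   # reduced propagator
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = ((X2 @ Vt.conj().T) / S) @ W              # exact DMD modes
    freqs = np.log(eigvals).imag / (2.0 * np.pi * dt)  # mode frequencies (Hz)
    return Phi.T.reshape(-1, ny, nx), freqs
```

Fourier transforming a row of the POD time coefficients then yields the PSD of that mode, as shown in the right panel of Fig. 30.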
The top and bottom rows show snapshots of the slow body sausage and kink modes, respectively. From left to right, the columns show the POD and DMD modes from HARDcam H\(\alpha \) intensity observations of a sunspot (Jess et al. 2017), then the corresponding magnetic cylinder model modes. As shown in the color bar, the intensity oscillations are normalized between \(-1\) and 1, hence the blue and red regions are in anti-phase. The methods of POD and DMD provide a most promising approach to decompose MHD wave modes in pores and sunspots, even if their cross-sectional shapes are much more irregular than this example. Image adapted from Albidah et al. (2021)
In the case of the magnetic cylinder model, assuming a static background, the eigenmodes, e.g., kink, sausage, and fluting, are orthogonal to each other in space by definition. Furthermore, each mode can have a broadband signal in \(\omega \) and k, as shown for a real sunspot in the right panel of Fig. 30. Hence, POD can identify such modes in pores and sunspots, providing there is no significant background flow that will break the condition of orthogonality. Furthermore, if a mode has a dominant power frequency, this can be identified with DMD as well as POD. Indeed, this was done by Albidah et al. (2021) for the 8 POD modes shown in the PSD plot in the right panel of Fig. 30, which have distinct power peaks between 4.3 and 6.5 mHz. In such cases, a combined POD/DMD approach is a most promising avenue for identifying physical modes. However, it must be highlighted, as initially introduced in Sect. 2.2, that the characterization of waves using POD and DMD techniques must be treated with the same caution as traditional FFT approaches. For example, it is essential that the relative amplitude of each eigenfunction is compared to noise and/or background sources to establish its true significance.
As will be shown in Sect. 3.3, POD and DMD methods are especially useful for decomposing MHD wave modes in pores and sunspots of more irregular cross-sectional shapes than the example shown in Fig. 31. This is because POD and DMD do not have the limitation of having their eigenfunctions pre-defined as they are with Fourier decomposition, where the basis functions are simply fixed as sinusoids. Even in the standard cylinder model, the eigenfunctions in the radial direction are Bessel functions not sinusoids. Hence, when it comes to identifying the spatial structure of individual MHD wave modes in pores and sunspots, the methods of POD and DMD are more suited to the job than Fourier decomposition. However, Fourier transforming the time coefficient of a POD mode is still necessary to calculate its PSD as shown in the right panel of Fig. 30.
2.7 \(B{-}\omega \) diagrams
Imaging spectropolarimetry offers the additional possibility to study the variations in the wave power spectrum as a function of magnetic flux. To this aim, Stangalini et al. (2021b) have proposed a new visualization technique, called a \(B{-}\omega \) diagram (see Fig. 32), which combines the power spectrum of a particular quantity (e.g., Doppler velocities) with its corresponding magnetic information. In this diagram, each column represents the average power spectrum of pixels within a particular magnetic field strength interval as inferred from polarimetry (e.g., via spectropolarimetric inversions or center-of-gravity methods; Rees and Semel 1979). The \(B{-}\omega \) diagram therefore has the capability to help visualize changes in the oscillatory field as one transitions from quiet Sun pixels outside the magnetic tube to the inner (more concentrated) magnetic region. In Fig. 32 we show an example of a \(B{-}\omega \) diagram taken from Stangalini et al. (2021b), which reveals unique wave information for a magnetic pore observed by IBIS in the photospheric Fe i 6173 Å spectral line. Here, we clearly see that the amplitude of 5-min (\(\approx 3\) mHz) oscillations in the quiet Sun is progressively reduced as one approaches the boundary of the magnetic pore (increasing B values). On the other hand, immediately inside the boundary of the pore (highlighted using a dashed vertical line), a set of spectral features is observed in both Doppler velocity and CP (circular polarization) oscillations (i.e., magnetic field oscillations), which are interpreted as specific eigenmodes of the observed magnetic cylinder.
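Constructing a \(B{-}\omega \) diagram amounts to binning pixels by their inferred field strength and averaging their power spectra; a minimal sketch (with illustrative function and variable names) follows:

```python
import numpy as np

def b_omega(cube, b_map, dt, b_bins):
    """Average power spectrum of pixels binned by field strength.

    cube   : (time, y, x) array of, e.g., Doppler velocities
    b_map  : (y, x) field strengths from inversions (G)
    b_bins : 1D array of bin edges in B
    """
    freq = np.fft.rfftfreq(cube.shape[0], d=dt)
    power = np.abs(np.fft.rfft(cube, axis=0)) ** 2        # (freq, y, x)

    diagram = np.full((freq.size, b_bins.size - 1), np.nan)
    idx = np.digitize(b_map, b_bins)                      # bin index per pixel
    for j in range(1, b_bins.size):
        sel = idx == j
        if sel.any():
            diagram[:, j - 1] = power[:, sel].mean(axis=1)
    return freq, diagram
```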

Image reproduced with permission from Stangalini et al. (2021b)
Doppler velocity (left) and circular polarization (CP; center) \(B{-}\omega \) diagrams of a magnetic pore observed in the photospheric Fe i 6173 Å spectral line. The vertical blue dashed lines represent the boundary of the umbral region as inferred from intensity images. The right panel shows the average spectra outside and inside the magnetic umbra of the pore. The 5-min (p-mode) oscillations dominate the quiet Sun, but their amplitude is progressively reduced (absorbed) as one approaches the concentrated magnetic fields of the pore, until a series of eigenmodes are excited within the magnetic tube itself.
2.8 Effects of spatial resolution
The solar atmosphere is highly structured, presenting features across a wide range of spatial scales down to the resolution limit of current instrumentation. Oscillations can be localized at particular spatial scales/features (see, e.g., the discussion in Sect. 3.3). This means that, for instance, the Doppler velocity, or indeed any other diagnostic, is the average within the resolution angle of the observations. For this reason, the signal itself and its inherent temporal oscillations associated with features below (or close to) the resolution limit can be underestimated (MacBride et al. 2021).
To illustrate this effect, we consider a case study based on CRISP observations acquired at the SST of a quiet Sun region, which were previously deconvolved using the MOMFBD code (van Noort et al. 2005) to reduce the effects of residual image aberrations. Here, for the sake of simplicity, we consider the starting data as “perfect data” for the sole purpose of illustrating the effects of spatial resolution on the final power spectra of the oscillations. In the left panel of Fig. 33 we show the original instantaneous Doppler velocity field obtained from the Fe i 6301.5 Å photospheric spectral line. In order to mimic the effect of a lower spatial resolution, we convolve this data using a point spread function (PSF), assumed here to be Gaussian, with a larger full-width at half-maximum (FWHM). In order to simplify the process, we do not consider the effects of residual seeing aberrations present in the original and convolved images. Therefore, our PSF model only considers the effect of the instrumental PSF, which can be represented by the Houston diffraction limited criterion (Houston 1927),
\[ {\text{FWHM}} = 1.03\,\frac{\lambda }{D} , \quad (12) \]
where \(\lambda \) is the observed wavelength and D is the diameter of the telescope. Local seeing effects in ground-based observations can further reduce the effective resolution, in addition to the seeing conditions themselves varying significantly throughout the observations, thus providing further (time varying) degradation to the data. In the left panel of Fig. 33, the photospheric velocity field is the result of two components: downflows in the intergranular lanes (red colors) and upflows in the granules (blue colors). Since the intergranular lanes are much smaller and narrower than the granules, the velocity signals associated with the intergranular regions become more affected (i.e., reduced) by the lower spatial resolution induced by worsening seeing conditions. This effect is apparent in the middle and right panels of Fig. 33, where the progressively worsening seeing conditions (\({\text{FWHM}}=0.2''\) middle panel; \({\text{FWHM}}=0.5''\) right panel) result in lost fine-scale velocity information.
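Mimicking a coarser spatial resolution, as in the middle and right panels of Fig. 33, is a single Gaussian convolution; in the sketch below the plate scale value is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_resolution(doppler_map, fwhm_arcsec, plate_scale):
    """Convolve a Doppler map with a Gaussian PSF of the given FWHM.

    fwhm_arcsec : target PSF FWHM (arcsec)
    plate_scale : arcsec per pixel (e.g., ~0.059''/pixel for SST/CRISP;
                  quoted here for illustration only)
    """
    sigma_pix = fwhm_arcsec / (2.0 * np.sqrt(2.0 * np.log(2.0))) / plate_scale
    return gaussian_filter(doppler_map, sigma_pix)

# degraded = degrade_resolution(v_los, fwhm_arcsec=0.5, plate_scale=0.059)
```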
Estimated effects of the spatial resolution (i.e., different FWHMs of the instrumental PSF; see Eq. (12)) on the observed Doppler velocity field. The original Doppler velocity field observed by the CRISP instrument at the SST in the Fe i 6301.5 Å photospheric spectral line (left panel) is convolved with a Gaussian PSF with larger and larger FWHMs to mimic the effects of a lower spatial resolution (middle and right panels). The sign convention employed shows downflows (positive velocities) as red colors and upflows (negative velocities) as blue colors. It can be seen in the middle (\({\text{FWHM}}=0.2''\)) and right (\({\text{FWHM}}=0.5''\)) panels that progressively worsening seeing conditions results in lost velocity signals from primarily small-scale features (e.g., intergranular lanes)
If the resolution angle is smaller than the angular size of the feature being studied, then the measured signal will approach the true value. This is due to the ‘filling factor’ being equal to ‘1’, whereby the feature of interest occupies the entirety of the resolution element on the detector. On the contrary, if the resolution element is larger than a particular spatial feature, then the signal measured will be a combination of both the feature of interest and the plasma in its immediate vicinity. Here, the filling factor of the desired structure is \(<1\), resulting in a blended signal that produces the measured parameters. In the specific case of intergranular lanes (see, e.g., Fig. 33), this means that if the resolution element is larger than their characteristic width, signal from the neighboring granules will be collected too. This effect is shown in Fig. 34, where the probability density functions (PDFs) of the instantaneous velocities for different spatial resolutions are shown. By lowering the spatial resolution, the original skewed distribution of the velocity, which is a consequence of the different spatial scales associated with the upflows (blueshifts) and downflows (redshifts), transitions into a more symmetric distribution that is characterized by smaller velocity amplitudes.
Probability density functions (PDFs) of the instantaneous velocity fields shown in Fig. 33 as a function of spatial resolution. Here, the blue, orange, and green lines represent the PDFs for three different seeing conditions represented by a \({\text{FWHM}}=0.16''\), \({\text{FWHM}}=0.2''\), and \({\text{FWHM}}=0.5''\), respectively. It can be seen that worse seeing conditions (e.g., the green line) produce more symmetric distributions and smaller velocity amplitudes due to the spatial averaging of the measured signals
These effects, in turn, also translate into a reduction of the measured amplitudes of any oscillations present in the data. This effect can be seen in Fig. 35, where the suppression factor of the Doppler velocity amplitudes (upper panel) and the resulting power spectral densities in two distinct frequency bands, namely 3 mHz and 5 mHz (1 mHz bandwidth; lower panel), are shown as a function of the spatial resolution. The suppression factor gives an idea of the underestimation of the amplitudes of the embedded oscillations, and in the top panel of Fig. 35 it is normalized to the value associated with the original SST/CRISP data used here (i.e., \({\text{FWHM}}=0.16''\) provides a suppression factor equal to 1.0). From the upper panel of Fig. 35 we can also predict the amplitudes of the velocity oscillations captured in forthcoming observations from the new 4 m DKIST facility, which could be as large as 1.3–1.4 times that of the velocity amplitudes measured with a 1 m class telescope at the same wavelength (under similar local seeing conditions).
Wave amplitude suppression factor (upper panel) and the resulting power spectral densities (lower panel) for observations acquired with different spatial resolutions. In the upper panel, the wave amplitude suppression factors (blue dots) are computed with respect to the velocity information displayed in Fig. 33, with the vertical dotted lines highlighting telescope aperture sizes of 4 m (DKIST), 1 m (SST), and 0.1 m. The dashed red line displays an exponential fit (using Eq. (13)), with the fit parameters shown in the figure legend. The lower panel displays the resulting power spectral densities, as a function of spatial resolution, for two key frequencies commonly found in observations of the solar atmosphere, notably 2.5–3.5 mHz (orange dots) and 4.5–5.5 mHz (blue dots). Again, the power spectral densities are fitted using Eq. (13), with the corresponding fit parameters shown in the figure legend. These panels document the importance of spatial resolution when attempting to measure weak oscillatory processes, since poor spatial resolution (either through small telescope aperture sizes or poor local seeing conditions) may result in complete suppression of the observable signal
Both the suppression factor and the resulting power reduction, as a function of spatial resolution, are well modeled by an exponential decay of the form,
\[ A({\text{FWHM}}) = A_{0}\,e^{-{\text{FWHM}}/s_{0}} + C , \quad (13) \]
where \(A_{0}\) is either the amplitude of the velocity signals or the wave power, \(s_{0}\) is a characteristic spatial length, and C is a constant. Equation (13) neatly characterizes the impact spatial resolution has on the visible wave characteristics: when the resolution element is larger than the characteristic physical scale of the observed process in the solar atmosphere (i.e., \({\text{FWHM}}>s_{0}\)), the oscillatory signal is strongly suppressed. This may result in weak oscillatory amplitudes being lost from the final data products, a process that was recently discussed by Jess et al. (2021b) in the context of sunspot oscillations.
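Fitting Eq. (13) to measured amplitudes or powers, as in Fig. 35, is a standard non-linear least-squares problem; a brief sketch (the array names are placeholders for the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

def suppression_model(fwhm, a0, s0, c):
    """Eq. (13): A(FWHM) = A0 * exp(-FWHM / s0) + C."""
    return a0 * np.exp(-fwhm / s0) + c

# fwhm_values (arcsec) and amplitudes measured from the degraded maps
# popt, pcov = curve_fit(suppression_model, fwhm_values, amplitudes,
#                        p0=(1.0, 0.3, 0.0))
```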
Such amplitude suppression effects imply that when estimating the energy flux of waves, one needs to consider the specific spatial resolution achieved and correct the resulting estimates by a factor depending on the FWHM of the instrumental PSF and the local seeing effects. We note that this effect strongly depends on the characteristic spatial length of the processes observed in the solar atmosphere. In order to illustrate the problem we have made use of photospheric observations (i.e., Figs. 33, 34, 35). However, due to the presence of narrow filamentary structures observed in the chromosphere, the power of the oscillations can be even more underestimated at those atmospheric heights.
2.9 Identification of MHD wave modes
In this section, we will not review MHD wave theory in any great detail since this has been covered previously in many books and reviews (see e.g., Aschwanden 2005; Nakariakov and Verwichte 2005; Priest 2014; Jess et al. 2015; Roberts 2019). Instead, we would like to highlight the particular challenges of identifying MHD wave modes from observational data given what is known from MHD wave theory.
2.9.1 Homogeneous and unbounded plasma
In most textbooks, MHD waves are, for simplicity, rightly introduced by assuming a homogeneous unbounded plasma with a straight and constant magnetic field. This highly idealized plasma configuration only permits propagating Alfvén, slow, and fast magnetoacoustic wave modes. In stark contrast, the Sun’s atmosphere is actually very inhomogeneous, and the newest high resolution instrumentation reveals the solar plasma to be ever more finely structured. But let us assume the wavelengths are large enough so that these MHD wave modes do not “feel” the effect of any plasma fine structure, hence allowing us to apply the unbounded homogeneous plasma model, as a zeroth order approximation, to observational data. How can we actually identify the Alfvén, slow, and fast magnetoacoustic MHD wave modes? As we shall discuss, in practical terms, even in this simplest of plasma configurations, each MHD wave mode would actually be non-trivial to identify without ambiguity, even from excellent quality spectropolarimetric data.
First, let us consider the Alfvén wave (Alfvén 1942). The only restoring force of this wave is magnetic tension, but since this wave is incompressible, the magnetic field lines remain equidistant from each other as they are oscillating. Hence, although the direction of the magnetic field vectors will change with time as the field lines oscillate, the magnitude of the vectors will remain constant. Therefore, this wave will not reveal itself through variations in the magnetic field strength using the Zeeman or Hanle effects. Also, due to its incompressibility, the Alfvén wave will not reveal itself in intensity oscillations since the density is not perturbed. This only leaves the velocity perturbations associated with this wave, which could in principle be detected in Doppler measurements. However, to truly identify an Alfvén wave, it would have to be established that the velocity perturbations were perpendicular to the magnetic field lines and that the wave vector was not perpendicular to the direction of the magnetic field. To add even more difficulty to the challenge of identifying an Alfvén wave, it is only approximately anisotropic, in the sense that the fastest propagation is along the direction of the magnetic field and only completely perpendicular propagation is forbidden, i.e., the more perpendicular the wave vector becomes relative to the magnetic field, the slower the propagation will be.
What about identifying the slow and fast magnetoacoustic modes? The allowed directions for the slow magnetoacoustic wave vector are very similar to those of the Alfvén wave, meaning that it is only approximately anisotropic and propagation perpendicular to the magnetic field direction is forbidden. However, unlike the Alfvén wave, the slow magnetoacoustic wave is compressible and should reveal itself in intensity oscillations if the amplitude of the perturbations is large enough relative to the background. However, to establish even more convincing evidence, a slow magnetoacoustic wave requires validation that the plasma and magnetic pressure perturbations are in anti-phase. Of course, this is not an easy task in observational data and would require both a fortuitous line-of-sight and an excellent signal-to-noise ratio to determine perturbations in both intensity and Zeeman/Hanle effect measurements. In contrast to the Alfvén and slow magnetoacoustic waves, the fast magnetoacoustic wave is more isotropic in nature since it can also propagate perpendicular to the magnetic field. A further key difference to the slow magnetoacoustic wave is that the plasma and magnetic pressure perturbations associated with a fast magnetoacoustic wave are in phase. To show this from observational data would provide compelling evidence that a fast magnetoacoustic wave mode has indeed been identified, but, as with showing the anti-phase behavior between plasma and magnetic pressures for a slow magnetoacoustic wave, this is not a trivial task, even with excellent quality spectropolarimetric data.
There are also more subtle points in distinguishing between the Alfvén, slow, and fast magnetoacoustic wave modes depending on the value of plasma-\(\beta \), which itself is difficult to determine from observational data. Importantly, for MHD wave modes, the value of plasma-\(\beta \) also indicates the relative values of the sound and Alfvén speeds. Especially problematic is the case when the sound speed is close to the Alfvén speed, since here the propagation speeds of the Alfvén, slow, and fast magnetoacoustic waves along the direction of the magnetic field are practically indistinguishable. This effect is clearly demonstrated in Fig. 36, which is based on the ‘NC5’ flux tube model presented by Bruls and Solanki (1993), and clearly shows how the localized velocities associated with different wave modes can become difficult to disentangle in the lower solar atmosphere, hence providing some ambiguity when attempting to diagnose the true wave mode from the propagation velocity alone. But remember, the nuanced discussion we have had here on wave mode identification assumed that the solar plasma was both homogeneous and unbounded. In practical terms, it is more likely that the analysis of waves in the lower solar atmosphere will be directly related to their excitation in, and propagation through, large scale magnetic structures such as sunspots and pores (see Sect. 3.2) or smaller scale structures such as spicules and fibrils (see Sect. 3.4). In such cases, the most commonly applied model is that of the magnetic cylinder (e.g., Wentzel 1979; Wilson 1979, 1980; Spruit 1982; Edwin and Roberts 1983), which we shall discuss next.
2.9.2 Magnetic cylinder model
The advantage of the magnetic cylinder model is that it allows for the key plasma parameters, e.g., magnetic field strength and plasma density, to differ inside and outside of the flux tube, allowing us to introduce inhomogeneity in the direction perpendicular to the cylinder axis. In this model, relative to the cylindrical coordinates \((r, \theta , z)\), where r, \(\theta \), and z are the radial, azimuthal, and axial directions, respectively, waves can either be standing or propagating in all three orthogonal directions (see the left panel of Fig. 37). If the wave is propagating in the radial direction this is a so-called “leaky” wave, which is not trapped by the cylindrical waveguide and damps due to MHD radiation. The so-called “trapped” modes are standing in the radial direction with the greatest wave energy density in the internal region of the cylinder. Outside of the cylinder the trapped mode is evanescent and decays with increasing distance from the tube.

Images reproduced with permission from Arregui et al. (2005, left), copyright by ESO; and Morton et al. (2012, middle and right panels), copyright by Macmillan
A typical cylindrical flux tube model (left panel) represented by a straightened magnetic tube of length, L, and radius, R. The magnetic field, B, is uniform and parallel to the z-axis and the whole configuration is invariant in the azimuthal direction, \(\theta \) (labeled as \(\varphi \) in the diagram). In the schematic, the density varies in a non-uniform transitional layer of width, l, from a constant internal value, \(\rho _{i}\), to a constant external value in the local plasma environment, \(\rho _{e}\). The middle and right panels show the effects of \(m=0\) (sausage) and \(m=1\) (kink) wave perturbations, respectively, to the equilibrium flux tube. The sausage wave (middle) is characterized by an axi-symmetric contraction and expansion of the tube’s cross-section. This produces a periodic compression/rarefaction of both the plasma and magnetic field. The kink wave (right) causes a transverse displacement of the flux tube. In contrast to the sausage wave, the kink wave displacement/velocity field is not axi-symmetric about the flux tube axis. The red lines show the perturbed flux tube boundary and thick arrows show the corresponding displacement vectors. The thin arrows labelled B show the direction of the background magnetic field.
Beyond the basic descriptions of whether the mode is “leaky” or “trapped”, the azimuthal integer wave number, m, defines whether the waves are the so-called “sausage”, “kink”, or “fluting” modes. The sausage mode has \(m=0\) and is azimuthally symmetric, while the kink mode has \(m=1\) and is azimuthally asymmetric (see the middle and right panels of Fig. 37). The fluting modes are higher order in the azimuthal direction with \(m \ge 2\). A further classification of wave types in a magnetic cylinder is “body” or “surface” modes. A body wave is oscillatory in the radial direction inside the tube and evanescently decaying outside. Because the body wave is oscillatory inside the tube, it has a fundamental mode in the radial direction and also higher radial harmonics. In contrast, a surface wave is evanescent inside and outside of the tube, with its maximum amplitude at the boundary between the internal and external plasma. Since it is strictly evanescent inside the tube, the surface mode cannot generate higher radial harmonics.
At this point, it is worth explaining why confusion has arisen over the years since the seminal publication by Edwin and Roberts (1983), who also introduced the terms “fast” and “slow” to classify the propagation speeds of MHD wave modes along the axis of the magnetic cylinder. In the dispersion diagrams of a magnetic cylinder, distinct bands appear for a particular wave mode where the axial phase speed is bounded by characteristic background speeds. As an example, we can model a photospheric waveguide as being less dense than the surrounding plasma and having a stronger magnetic field internally than externally. This would be a reasonable basic model for, e.g., a pore or sunspot, where the internal density depletion is a result of the increased magnetic pressure (Maltby et al. 1986; Low 1992; Cho et al. 2017; Gilchrist-Millar et al. 2021; Riedl et al. 2021). In this case, we can form the inequality of the characteristic background speeds as \(v_A>c_e>c_0>v_{Ae}\), where \(v_A\) is the internal Alfvén speed, \(c_e\) is the external sound speed, \(c_0\) is the internal sound speed, and \(v_{Ae}\) is the external Alfvén speed. This results in a slower band with phase speeds between \([c_T, c_0]\), where the internal tube speed, \(c_T\), is defined as,
\[ c_{T} = \frac{c_{0}\,v_{A}}{\sqrt{c_{0}^{2}+v_{A}^{2}}} . \]
In addition, a faster band also exists with phase speeds between \([c_0, c_e]\). Wave modes with phase speeds below the “slow” band and above the “fast” band are not trapped modes (having real \(\omega \) and \(k_z\) values). The “slow” and “fast” bands for these chosen photospheric conditions are shown in the dispersion diagram in the left panel of Fig. 38.
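As a quick numerical illustration of these bands, the snippet below evaluates the tube speed for representative photospheric values chosen purely to satisfy the ordering \(v_A>c_e>c_0>v_{Ae}\); the numbers are illustrative assumptions, not measurements:

```python
import numpy as np

# Representative photospheric values (illustrative assumptions only),
# chosen to satisfy the ordering v_A > c_e > c_0 > v_Ae (km/s)
v_A, c_e, c_0 = 12.0, 8.0, 7.0

c_T = c_0 * v_A / np.sqrt(c_0**2 + v_A**2)   # internal tube speed
print(f"slow band: [{c_T:.1f}, {c_0:.1f}] km/s;"
      f" fast band: [{c_0:.1f}, {c_e:.1f}] km/s")
# slow band: [6.0, 7.0] km/s; fast band: [7.0, 8.0] km/s
```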
Left panel: A dispersion diagram is shown for a representative photospheric magnetic cylinder. It can be seen that there are two distinct horizontal bands with slower and faster phase speeds. The fast band is bounded between \([c_0,c_e]\) and the slow band between \([c_T,c_0]\). The adjectives “slow” and “fast” here have a quite distinct meaning from the terms slow and fast when referring to the magnetoacoustic wave modes of a homogeneous and unbounded plasma. Right panel: A cartoon of theoretically predicted MHD wave modes in a sunspot, and their possible sources, based on the magnetic cylinder model of Edwin and Roberts (1983). Images adapted from Edwin and Roberts (1983, left panel) and Evans and Roberts (1990, right panel)
Although Edwin and Roberts (1983) used the perfectly apt adjectives, “slow” and “fast”, to describe the phase speed bounds of these distinct bands of trapped MHD wave modes, they have quite a different physical meaning to the terms slow magnetoacoustic and fast magnetoacoustic waves from the homogeneous and unbounded plasma model. This is most clearly illustrated when comparing the same label “fast” in both scenarios. For a cylindrical waveguide, any trapped fast MHD mode is strictly anisotropic since the propagating wave vector is restricted to being absolutely parallel to the magnetic field direction, which is also aligned with the cylinder axis. However, a fast magnetoacoustic wave in a homogeneous plasma can propagate at any angle relative to the magnetic field orientation.
There is a special class of incompressible Alfvén modes that can exist in a magnetic cylinder with any azimuthal wave number, m, the so-called torsional Alfvén waves (see e.g., Spruit 1982). Like the Alfvén wave in a homogeneous plasma, the only restoring force is magnetic tension. However, torsional Alfvén waves are strictly anisotropic since they can only propagate along the direction of the tube axis, whereas their counterpart in a homogeneous plasma can propagate at any angle (with the exception of perpendicular) relative to the magnetic field. The torsional Alfvén wave can only be excited if the driver itself is incompressible, meaning that the tube boundary is not perturbed at all in the radial direction. However, in reality it is likely that the boundaries of solar magnetic flux tubes are perturbed to some degree in the radial direction. If the boundary is only slightly perturbed in the radial direction, and the dominant perturbations are in the axial direction, then this will excite a slow mode. If the radial perturbation dominates over the axial perturbation, resulting in a greater perturbation of the boundary, then this will excite a fast mode. The greater radial perturbation for a fast mode means that magnetic tension plays a larger role in the restoring force than for a slow mode, where the longitudinal compressive forces of plasma and magnetic pressure dominate.
Understanding the phase relations between the restoring forces for MHD wave modes in a magnetic cylinder is not as straightforward as it is for the three possible MHD modes in a homogeneous plasma. This is because the phase relations between plasma pressure, magnetic pressure, and magnetic tension restoring forces depend on whether the wave is propagating or standing in each of the three orthogonal directions, i.e., radial (r), azimuthal \((\theta )\), and axial (z). Also, the radial spatial structuring of the plasma in a magnetic cylinder means that the perturbed MHD variables, such as the magnetic field \((B_r,B_{\theta },B_z)\) and velocity \((v_r,v_{\theta },v_z)\) components, are related not only by time derivatives, but also by spatial derivatives that depend on the variation of the background plasma properties.
A simplified thin tube or “wave on a string” approximation was made by Fujimura and Tsuneta (2009) to derive the phase relations between \(v_r\) and \(B_r\) for a kink mode, and between \(v_z\) and \(B_z\) for a sausage mode. This was done for both propagating and standing waves in the axial direction, but caution should be taken in applying these results to structures of finite width. A more detailed investigation into the phase relations of these MHD variables was undertaken for the sausage mode by Moreels and Van Doorsselaere (2013), utilizing a magnetic cylinder of finite width under photospheric conditions. Like Fujimura and Tsuneta (2009), this model predicted the phase relations for both standing and propagating waves in the axial direction. A note of caution should be added here: both the models of Fujimura and Tsuneta (2009) and Moreels and Van Doorsselaere (2013) assume the kink and sausage modes are “free” oscillations of the structure and are not being driven. To correctly derive the phase relations between the MHD wave variables in a driven system demands that the system be solved as an initial value problem. However, the exact spatial and temporal structures of the underlying drivers of the waves observed in pores and sunspots are currently not universally understood.
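To see where such phase relations come from in the simplest possible setting, consider an illustrative reduction under the thin tube approximation (a sketch, not a reproduction of the cited calculations). The linearized ideal induction equation for a transverse perturbation of a straight vertical tube with axial field \(B_0\) reads
\[
\frac{\partial B_r}{\partial t} = B_0 \frac{\partial v_r}{\partial z}.
\]
For a propagating wave, \(v_r = v_0 \cos (k_z z - \omega t)\), this integrates to
\[
B_r = -\frac{B_0 k_z}{\omega }\, v_0 \cos (k_z z - \omega t),
\]
so \(v_r\) and \(B_r\) are in anti-phase (or in phase for the oppositely directed wave). For a standing wave, \(v_r = v_0 \cos (k_z z) \cos (\omega t)\), the same equation instead yields
\[
B_r = -\frac{B_0 k_z}{\omega }\, v_0 \sin (k_z z) \sin (\omega t),
\]
i.e., the two variables are in quadrature (\(90^{\circ }\) out of phase) in both space and time. This propagating versus standing distinction underlies the phase relations derived by Fujimura and Tsuneta (2009).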
Although the phase relations between the perturbed variables for any MHD wave mode may not be simple to predict theoretically, the spatial structure of these variables (independent of time) should, at the least, correlate in a straightforward way, provided the cross-section of the waveguide is resolved (e.g., particularly in the case of larger magnetic structures such as pores and sunspots). First, let us consider a fixed axial position, z, which for a vertical tube would correspond to a fixed height in the solar atmosphere. If the magnetic cylinder is oscillating with an eigenmode, then the variables related to compressible axial motion, i.e., \(v_z\), \(B_z\), and plasma pressure (also related to perturbations in temperature and plasma density), should have the same spatial structure in the radial (r) and azimuthal \((\theta )\) directions. Likewise, the spatial structure of variables related to radial perturbations of the magnetic field, i.e., \(v_r\) and \(B_r\), should be consistent. The same is also true for the variables that relate to the torsional motions of the magnetic field, i.e., \(v_{\theta }\) and \(B_{\theta }\). Again, all these theoretical predictions assume free oscillations of the entire magnetic structure, e.g., a pore or sunspot. If the oscillations are being driven, then this becomes a more complicated and computationally expensive modeling problem to solve. Also, the spatial scale of the driver relative to the size of the magnetic structure is crucial. To excite the global eigenmodes of a magnetic structure, the driver has to be at least as large as the structure itself. If the driver is much smaller than the magnetic structure, it will still excite localized MHD waves, but these will not be global eigenmodes of the entire magnetic structure. This too requires a different modeling approach, see, e.g., Khomenko and Collados (2006), who modeled p-mode propagation and refraction through sunspots.
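As an illustrative sketch of these shared spatial structures (assuming the standard cylinder eigenfunctions, with an arbitrarily chosen radial wavenumber rather than one solved from the full dispersion relation), the compressible body-mode variables inside the tube vary as \(J_m(nr)\cos (m\theta )\), where \(J_m\) is a Bessel function of the first kind. The snippet below evaluates this common radial–azimuthal pattern for the sausage (\(m=0\)) and kink (\(m=1\)) modes:

```python
import numpy as np
from scipy.special import jv, jn_zeros

# Shared radial-azimuthal structure J_m(n*r)*cos(m*theta) of body-mode
# perturbations (e.g., v_z, B_z, plasma pressure) inside a cylinder of
# radius R, sampled at a fixed axial position z.
R = 1.0                                   # tube radius (normalized)
r = np.linspace(0.0, R, 200)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
rr, tt = np.meshgrid(r, theta)

for m, label in [(0, "sausage"), (1, "kink")]:
    # Illustrative choice: place the first zero of J_m at the boundary.
    # In a full treatment, n follows from the dispersion relation.
    n = jn_zeros(m, 1)[0] / R
    structure = jv(m, n * rr) * np.cos(m * tt)
    print(f"{label} (m={m}): max |amplitude| = {np.abs(structure).max():.2f}")
```

Plotting `structure` over the tube cross-section would reproduce the familiar azimuthally symmetric (sausage) and two-lobed (kink) patterns.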
High resolution images of sunspots, pores, magnetic bright points, and fibrillar structures continually remind us that modeling these features using cylindrical flux tube geometries, while mathematically simpler, is far from realistic. Even in basic membrane models, in which separation of variables is possible, the cross-sectional shape has a fundamental effect on the structure of the eigenfunctions. For elliptical magnetic flux tubes, Aldhafeeri et al. (2021) investigated the effect of eccentricity, \(\epsilon =\sqrt{1-b^2/a^2}\), where a and b are the semi-major and semi-minor axes, respectively, on the spatial structure of the eigenfunctions (see the sketch below for what these eccentricities imply for the axis ratios). See, for example, Fig. 39, which shows two sunspot umbrae fitted with ellipses with eccentricities \(\epsilon =0.58\) and \(\epsilon =0.76\). These are not negligible values, since a circle has \(\epsilon =0\). Figure 40 shows the \(m=1\) (kink) and \(m=2,3\) (fluting) fast body modes whose phase is odd with respect to the major axis, for increasing eccentricity, while Fig. 41 shows the same modes whose phase is even with respect to the major axis. Although all MHD wave modes in flux tubes of elliptical cross-section have their spatial structure distorted when compared to their equivalent versions in flux tubes of circular cross-section, it can be seen that the fluting modes with even phase with respect to the major axis (shown in Fig. 41) become notably different in character as eccentricity increases, since previously distinct regions of phase or anti-phase end up coalescing. This advancement beyond the cylindrical flux tube model demonstrates that more sophisticated modeling of magnetic flux tubes with more realistic, and hence more irregular, cross-sectional shapes is required to more accurately interpret which types of wave modes are present in pores and sunspots. Recently this was done by Albidah et al. (2022) and Stangalini et al. (2022) to identify MHD wave modes in sunspot umbrae, and this will be discussed in Sect. 3.2.
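To make the quoted eccentricities more tangible (a trivial sketch using only the values given above), converting them into semi-axis ratios shows that both umbrae depart visibly from circular cross-sections:

```python
import math

# Axis ratio b/a recovered from an ellipse eccentricity:
# eps = sqrt(1 - b^2/a^2)  =>  b/a = sqrt(1 - eps^2)
for eps in (0.58, 0.76):
    ratio = math.sqrt(1.0 - eps**2)
    print(f"eps = {eps:.2f}  ->  b/a = {ratio:.2f}")   # 0.81 and 0.65
```

An eccentricity of \(\epsilon =0.76\) therefore corresponds to a semi-minor axis only \(\sim 65\%\) the length of the semi-major axis.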
Two active regions, NOAA AR12565 (left) and NOAA AR12149 (right), captured in the G-band by ROSA at the Dunn Solar Telescope. To show the departure from circular cross-sectional shape, ellipses are fitted to the sunspot umbrae. The eccentricity of the left umbra is \(\epsilon =0.76\), while the right umbra is \(\epsilon =0.58\). Image adapted from Aldhafeeri et al. (2021)
The normalized density perturbations of fast body wave modes under representative coronal conditions for the different values of eccentricity \(\epsilon \). Note that the eigenfunctions for slow body wave modes under photospheric conditions would have a very similar appearance. From top to bottom, the \(m=1\) (kink) and \(m=2,3\) (fluting) modes are shown which have an odd phase structure with respect to the major axis of the ellipse. Image adapted from Aldhafeeri et al. (2021)
In Sect. 2.8 the crucial issue of spatial resolution was discussed. In smaller-scale magnetic structures, such as off-limb spicules or on-disc fibrils, it is not possible to observe the true cross-section of the waveguide (as is possible for larger on-disc features such as pores and sunspots) in order to identify eigenmodes. However, fast sausage and kink modes can still be identified in these smaller structures if the amplitude of the radial motion (i.e., transverse to the magnetic field direction) is large enough. The kink mode is the only cylinder mode that causes a transverse displacement of the axis. For smaller magnetic structures, such as fibrils, the kink mode will appear as a “swaying” motion. If the radial motion of the fast sausage mode is large enough, then this causes periodic changes in the width of the structure, which can be resolved. Wave mode identification in smaller magnetic structures is addressed in detail in Sect. 3.4. As for larger-scale magnetic waveguides, where the cross-section can be resolved fully, such as in pores or sunspots, the right panel of Fig. 38 shows the wide variety of theoretically predicted MHD wave modes, including slow/fast and body/surface, that can exist in such structures based on the magnetic cylinder model of Edwin and Roberts (1983). Recent progress in the identification of such wave modes from observations is discussed in Sect. 3.3.
Across Sect. 2, we have discussed the fundamental theoretical considerations of waves manifesting in the solar atmosphere (Sect. 2.9), provided an overview of the techniques used to characterize them (Sects. 2.2–2.7), and summarized the challenges faced in light of variable spatial resolution (Sect. 2.8). Despite these challenges, over the last few decades the solar community has overcome many obstacles, allowing for the successful acquisition, extraction, and identification of many different types of wave modes across a wide variety of solar features. In the following section, we will overview recent discoveries in the field of waves in the lower solar atmosphere, as well as comment on the difficulties still facing the global community in the years ahead.
3 Recent studies of waves
In the past, review articles have traditionally segregated wave activity in the solar atmosphere into a number of sub-topics based on the specific wave types and structures demonstrating the observed behavior. For example, Jess et al. (2015) divided the review content on a feature-by-feature basis, including sections related to compressible and incompressible waveforms, which were subsequently further sub-divided into quiet Sun, magnetic network, and active region locations. However, as modern observations and modeling approaches continue to produce data sequences with ever-improving spatial resolutions, placing a physical boundary between two such regions becomes even more challenging. Indeed, emerging (and temporally evolving) magnetic fields often blur the boundaries between magnetic network elements, pores, proto-sunspots, and fully developed active regions. Hence, it is clear that solar complexity continues to increase with each improvement made in spatial resolution. As a result, dividing the content between previously well-defined structures becomes inappropriate, which is even more apparent now that mixed MHD waves (e.g., compressible and incompressible modes; Morton et al. 2012) are being identified in a broad spectrum of magnetic features.
Hence, for this topical review we employ just three (deliberately imprecise) sub-section headings, namely related to ‘global wave modes’, as well as ‘large-scale’ and ‘small-scale’ structures. This is to avoid repetition and confusion, and to allow the overlap between many of the observables in the Sun’s atmosphere to be discussed in a more transparent manner. Importantly, while discussing the recent developments surrounding wave activity in the lower solar atmosphere, we will attempt to pinpoint open questions that naturally arise from the cited work. We must stress that closing one research door more often than not opens two (or more) further avenues of investigation. Therefore, discussion of the challenges posed is not intended to discredit the cited work, but instead to highlight the difficult research stepping stones facing the solar physics community over the years and decades to come.
3.1 Global wave modes
The field of helioseismology has employed long-duration data sequences (some spanning multiple continuous solar cycles; Liang et al. 2018) to uncover the internal structure and dynamics of the Sun through its global oscillation properties. Pioneering observations by Frazier (1968) suggested, for the first time, the presence of dual oscillating modes in the solar atmosphere, something that contradicted previous interpretations where the observed oscillations were simply considered to be an atmospheric response to granular impacts. It was subsequently shown that a variety of global wavenumbers could be seen in the photospheric velocity field of the C i 538 nm line (Deubner 1975). Importantly, the pioneering work of Deubner (1975) revealed clear ridges in photospheric \(k{-}\omega \) power spectra, which helped to highlight, for the first time, that the ubiquitous 5-min p-mode oscillations are in fact resonant eigenmodes of the Sun. Novel observations acquired during austral summer at the South Pole uncovered 5-min oscillations at a wide range of horizontal wavelengths, revealing the true extent of oscillation modes associated with global solar resonances (Grec et al. 1980; Duvall and Harvey 1983). Traditionally, in the field of helioseismology, the Sun is considered as an approximately spherically symmetric body of self-gravitating fluid that is suspended in hydrostatic equilibrium (Christensen-Dalsgaard et al. 2000). This results in the modes of solar oscillation being interpreted as resonant vibrations, which can be represented as the product of a function of radius and a spherical harmonic, \(Y_{l}^{m}(\theta , \phi )\). Here, l relates to the horizontal scale on each spherical shell (commonly referred to as the ‘angular degree’), while m determines the number of nodes in solar longitude (commonly referred to as the ‘azimuthal order’).
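For readers who wish to visualize what a given \((l, m)\) pattern looks like, the sketch below samples a spherical harmonic on a longitude–colatitude grid using SciPy (the degree and order are arbitrary illustrative choices):

```python
import numpy as np
from scipy.special import sph_harm

# In SciPy's convention, sph_harm(m, l, theta, phi) takes theta as the
# azimuthal angle (longitude, 0..2*pi) and phi as the polar angle
# (colatitude, 0..pi).
l, m = 6, 3                                  # arbitrary illustrative mode
theta = np.linspace(0.0, 2.0 * np.pi, 180)   # longitude
phi = np.linspace(0.0, np.pi, 90)            # colatitude
tt, pp = np.meshgrid(theta, phi)

pattern = sph_harm(m, l, tt, pp).real        # surface oscillation pattern

print(f"Y_{l}^{m}: min = {pattern.min():.3f}, max = {pattern.max():.3f}")
```

Rendering `pattern` on a sphere shows the nodal lines in latitude and longitude that define the angular degree and azimuthal order.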
The specific modes of oscillation can be divided into three main categories: (1) Pressure modes (p-modes), which are essentially acoustic waves where the dominant restoring force is pressure, providing frequencies in the range of \(\sim 1{-}5\) mHz and angular degrees spanning \(0 \le l \le 10^{3}\) (Rhodes et al. 1997; Kosovichev 2011; Korzennik et al. 2013); (2) Internal gravity modes (g-modes), where the restoring force is predominantly buoyancy (hence linked to the magnitude of local gravitational forces), which typically manifest in convectively stable regions, such as the radiative interior and/or the solar atmosphere itself (Siegel and Roth 2014); and (3) Surface gravity modes (f-modes), which have high angular degrees and are analogous to surface waves in deep water, since they obey a dispersion relation that is independent of the stratification in the solar atmosphere (Mole et al. 2007). In the limit that the wavelength is much smaller than the solar radius, these waves are highly incompressible. The main restoring force for f-modes is gravity, which acts to resist wrinkling of the Sun’s surface.
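Because the f-mode obeys the deep-water dispersion relation, \(\omega ^2 = g k_h\), with horizontal wavenumber \(k_h \approx \sqrt{l(l+1)}/R_{\odot }\), its expected ridge is easy to sketch numerically (an order-of-magnitude illustration using standard solar surface values):

```python
import math

G_SUN = 274.0     # solar surface gravity [m/s^2]
R_SUN = 6.96e8    # solar radius [m]

def f_mode_frequency_mHz(l: int) -> float:
    """Cyclic f-mode frequency from the deep-water relation omega^2 = g*k_h."""
    k_h = math.sqrt(l * (l + 1)) / R_SUN      # horizontal wavenumber [1/m]
    omega = math.sqrt(G_SUN * k_h)            # angular frequency [rad/s]
    return 1.0e3 * omega / (2.0 * math.pi)    # cyclic frequency [mHz]

for l in (200, 500, 1000):
    print(f"l = {l:4d}: nu_f ~ {f_mode_frequency_mHz(l):.2f} mHz")
```

For \(l=1000\) this gives \(\sim 3.2\) mHz, consistent with the location of the observed f-mode ridge in \(k{-}\omega \) diagrams.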
The intricacies of helioseismology become even more complex once isolated magnetic features, such as sunspots, develop within the solar atmosphere, which impact the velocities and travel times of the embedded global wave modes (Braun and Lindsey 2000; Rajaguru et al. 2001, 2004, 2019; Kosovichev 2012; Schunker et al. 2013, 2016). A complete overview of the progress in helioseismology is beyond the scope of the present review. Instead, we refer the reader to the vast assortment of review articles that focus on the widespread development of helioseismology over the last few decades (e.g., Deubner 1983; Bonnet 1983; Christensen-Dalsgaard 2002; Gizon and Birch 2005; Gizon et al. 2010, 2017; Gough 2013; Basu 2016; Buldgen et al. 2019).
Importantly, the magnetic field in the solar photosphere is inhomogeneous and found in discrete concentrations across all spatial scales (Zwaan 1987). Outside of the magnetic concentrations, where plasma pressure and gravity are the dominant restoring forces, longitudinal acoustic waves (i.e., p-modes) are generated at the top of the convection zone by the turbulent convective motions (Stein 1967; Goldreich and Kumar 1990; Bogdan et al. 1993). The p-modes can propagate upwards and contribute to heating of the higher layers if their frequency is larger than the acoustic cut-off frequency (Ulmschneider 1971b; Wang et al. 1995). Thus, the acoustic waves can dissipate their energy in the solar chromosphere by forming shocks (as a result of the gas density decreasing with height), which are manifested in intensity images as, e.g., intense brightenings (Rutten and Uitenbroek 1991; Carlsson and Stein 1997; Beck et al. 2008; Eklund et al. 2020, 2021), or drive, e.g., Type I spicules at the so-called ‘magnetic portals’ (Jefferies et al. 2006). Moreover, Skogsrud et al. (2016) showed that these shocks are associated with dynamic fibrils in an active region, exploiting observations from the SST and the Interface Region Imaging Spectrograph (IRIS; De Pontieu et al. 2014).
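As a rough numerical illustration of this cut-off condition (a sketch assuming an isothermal atmosphere with order-of-magnitude values near the temperature minimum; the temperature and mean molecular weight below are assumptions, not measurements), the acoustic cut-off frequency can be estimated as \(\nu _{ac} = c_s/(4\pi H)\), where \(H\) is the pressure scale height:

```python
import math

K_B = 1.381e-23      # Boltzmann constant [J/K]
M_H = 1.673e-27      # hydrogen mass [kg]
G_SUN = 274.0        # solar surface gravity [m/s^2]
GAMMA = 5.0 / 3.0    # adiabatic index

T = 5000.0           # assumed temperature [K]
MU = 1.3             # assumed mean molecular weight

c_s = math.sqrt(GAMMA * K_B * T / (MU * M_H))   # sound speed [m/s]
H = K_B * T / (MU * M_H * G_SUN)                # pressure scale height [m]
nu_ac = c_s / (4.0 * math.pi * H)               # isothermal cut-off [Hz]

print(f"c_s ~ {c_s/1e3:.1f} km/s, H ~ {H/1e3:.0f} km, nu_ac ~ {nu_ac*1e3:.1f} mHz")
```

These assumed values give \(\nu _{ac} \sim 5\) mHz (periods of \(\sim \)3 min), which is why only the higher-frequency part of the p-mode spectrum can propagate upwards into the chromosphere.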