Abstract
Extreme ultra-violet images of the corona contain information over a wide range of spatial scales, and different structures such as active regions, quiet Sun, and filament channels contain information at very different brightness regimes. Processing of these images is important to reveal information, often hidden within the data, without introducing artefacts or bias. It is also important that any process be computationally efficient, particularly given the fine spatial and temporal resolution of the Atmospheric Imaging Assembly onboard the Solar Dynamics Observatory (AIA/SDO) and the anticipation of future higher-resolution observations. A very efficient process is described here, based on localised normalisation of the data at many different spatial scales. The method reveals information at the finest scales whilst maintaining enough of the larger-scale information to provide context. It also intrinsically flattens noisy regions and can reveal structure in off-limb regions out to the edge of the field of view. We also apply the method successfully to a white-light coronagraph observation.
1 Introduction
Extreme ultra-violet (EUV) observations currently provide the most important source of information on the low solar corona. As new EUV instruments are developed, the temporal, spatial, and spectral resolution becomes ever finer, giving new insight into coronal and chromospheric structure and dynamics. The Atmospheric Imaging Assembly (AIA: Lemen et al. 2012) onboard the Solar Dynamics Observatory (SDO: Pesnell, Thompson, and Chamberlin 2012) provides very fine temporal and spatial resolution of the Sun at multiple wavelengths and is having a strong impact on the field. Even as the community develops methods to digest the huge volume of data from AIA/SDO, new instruments with even finer resolution are being planned and tested (e.g. the High Resolution Coronal Imager (Hi-C), see Cirtain et al. 2013).
Despite the development of automated detection tools for EUV observations (e.g. Martens et al. 2012), most scientific works begin from visual inspection of images. In particular, the volume of data provided by AIA/SDO is so high that low-resolution images are often used as a starting point to find features of interest. The higher-resolution data are then used for further analysis. Image processing is therefore an important step in analysing the data. The scientific return can be improved by the application of processing that better reveals features in the data, particularly in the early stages of analysis where visual inspection is most important.
A common approach to processing EUV images is simply to display the square root (or a gamma-curve transformation), or alternatively the logarithm, of the original pixel values. This is a quick and easy way of reducing the dominance of the image contrast range by a few small bright regions. To reveal dynamic features, time-differencing is used, where a previous image is subtracted from the current image. Such simple processes are commonly used not because of the quality of the output, but because they are quick and easy to apply. More advanced image-processing methods are less commonly used owing to their complexity and computational expense.
Wavelet-based techniques have been used by Stenborg and Cobelli (2003) and Stenborg, Vourlidas, and Howard (2008) to greatly improve the visual information available from the Extreme Ultraviolet Imaging Telescope (EIT: Delaboudiniere et al. 1995) aboard the Solar and Heliospheric Observatory (SOHO) and the Extreme UltraViolet Imager (EUVI: Howard et al. 2008) onboard the Solar Terrestrial Relations Observatory (STEREO: Kaiser et al. 2008). The technique involves the decomposition of images into different spatial scales and the filtering/enhancement of features at multiple resolutions. It is computationally expensive but gives very good results. A less sophisticated, yet very efficient, technique to reveal features above the limb is based on methods originally developed for coronagraphs (Morgan, Habbal, and Woo 2006; Druckmüllerová, Morgan, and Habbal 2011). Most relevant to this work is the recently developed Noise Adaptive Fuzzy Equalization (NAFE) method (Druckmüller 2013). The method is inspired by adaptive histogram equalisation, where local statistics govern the output value of a pixel. It uses a fuzzy membership function of a Gaussian-weighted local set to enhance structural detail, preserve contextual detail, and reduce noise. The NAFE method results in very clear images without artefacts and with excellent noise reduction. Its one downside is computational expense.
We summarise the problems in visualising features in EUV images (Section 2), introduce the new method (Section 3), apply the method to several images from various instruments (Section 4), and close with a brief summary (Section 5).
2 Observations
An observation from 04 May 2005 00:00 UT by the 171 Å channel of AIA is used as a working example throughout this section. The data were first pre-processed using the standard SDO SolarSoft routine ‘aia_prep’. This observation is shown without further image processing in the left panel of Figure 1. The 171 Å channel of AIA has a narrow bandwidth dominated by emission from highly ionised iron, with a formation temperature that peaks at ∼ 0.8 MK. The intensity of this image is therefore a function of the emitting plasma temperature and density. The line-of-sight depth and optical thickness of the plasma also have an effect on the measured intensity, as do small contributions from other, weaker spectral lines that share the same bandpass. The left panel of Figure 1 illustrates the main challenge of revealing information in EUV images. For various physical reasons (mostly due to density), the display range is dominated by the large difference between dark regions and the small bright regions near, or at, the base of active regions. Most of the structural detail appears dark and is hidden. A quick and easy way of revealing some of this structure is to take the square root of the image (processing of this type is generally known as a gamma transformation). This is shown in the right panel of Figure 1. Although a large improvement on the unprocessed image, much structure remains hidden, and bright areas become ‘washed out’. In particular, there is very little visible structure off the limb. The same is true of a logarithmic transformation, or of any other one-to-one mapping between input and output.
The dominance of the image contrast range by small bright regions can be shown quantitatively by selecting regions and plotting histograms of pixel values. This is shown in Figure 2 for four areas – quiet Sun, active-region base, active region, and off-limb. Note that these are pixel values normalised by the exposure time. The top ∼ 75 % of the brightness range is occupied by the small, very bright region at the base of the active region (cyan), leaving the information from all other regions within the lower 25 %. In fact, the off-limb and quiet-Sun regions have pixel values restricted to below ∼ 300. Furthermore, the bright active regions contain a large number of pixels that share the low values of the quiet Sun, and there are sharp boundaries between very dark and very bright regions. These characteristics become more extreme when flares occur.
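This diagnostic is straightforward to reproduce. The following minimal sketch (in Python rather than the IDL used later in this work) is ours, not the authors' code; the file name and region slices are arbitrary placeholders, and the exposure time is assumed to be stored in the standard AIA EXPTIME header keyword.

```python
import matplotlib.pyplot as plt
from astropy.io import fits

# Hypothetical file name; the pixel boxes below are placeholders for the
# quiet-Sun, active-region-base, active-region, and off-limb regions.
with fits.open("aia_171.fits") as hdul:
    hdr = hdul[-1].header
    img = hdul[-1].data.astype(float) / hdr["EXPTIME"]  # normalise by exposure time

regions = {
    "quiet Sun": img[500:700, 2000:2200],
    "active-region base": img[1800:1900, 2900:3000],
    "active region": img[1700:2100, 2700:3100],
    "off-limb": img[100:300, 2000:2200],
}
for name, box in regions.items():
    plt.hist(box.ravel(), bins=200, histtype="step", label=name)
plt.xlabel("pixel value (DN s$^{-1}$)")
plt.ylabel("number of pixels")
plt.yscale("log")
plt.legend()
plt.show()
```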
Any image processing that attempts to reveal the information hidden in EUV images must deal with the wide range of brightness values, preserve the steep spatial gradients in brightness, and respect the fact that even the brightest regions contain dark patches. Another challenge is the existence of structures at many spatial scales. We cannot apply edge detection or enhancement exclusively at small scales, since this will enhance some features whilst other structures and the larger-scale context are lost. Furthermore, with full-resolution AIA images of 4096×4096 pixels taken every 12 s in each channel (and in anticipation of the resolution of future instruments), computational efficiency is very important.
3 Method – Multi-Scale Gaussian Normalisation
Barring spurious calibration errors, the brightness values in the original EUV image will be positive. Let \(B\) be the original image normalised by exposure time, and let \(k_{w}\) be a two-dimensional Gaussian kernel of width \(w\) pixels in the x and y image dimensions. Throughout this work, \(w\) denotes the one-sigma width of the Gaussian. A normalised image \(C\) can be computed by

$$ C = \frac{B - B \otimes k_{w}}{\sigma_{w}}, \tag{1} $$

where

$$ \sigma_{w} = \sqrt{\bigl[\,(B - B \otimes k_{w})^{2}\,\bigr] \otimes k_{w}}. \tag{2} $$
For a given pixel, the numerator of Equation (1) effectively subtracts the local ‘mean’ (weighted by the Gaussian function centred on the pixel), and the denominator \(\sigma_{w}\) is the local ‘standard deviation’, also weighted by the Gaussian kernel. Thus, simply speaking, \(C\) is locally normalised to a standard deviation of one and a mean of zero. An example of \(C\), computed with \(w = 20\), is shown in the left image of Figure 3.
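As an illustration, this single-scale normalisation can be written in a few lines using SciPy's Gaussian filtering. This is a sketch of ours, not the authors' IDL implementation; the small constant guarding the division is our addition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_normalise(B, w):
    """Return C of Equation (1): subtract the Gaussian-weighted local mean and
    divide by the Gaussian-weighted local standard deviation (Equation (2))."""
    local_mean = gaussian_filter(B, sigma=w)                      # B ⊗ k_w
    sigma_w = np.sqrt(gaussian_filter((B - local_mean) ** 2, sigma=w))
    return (B - local_mean) / np.maximum(sigma_w, 1e-15)          # avoid division by zero

# Example corresponding to the left panel of Figure 3:
# C = local_normalise(B, w=20)
```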
This process of local normalisation is in essence similar to adaptive histogram equalisation, where each pixel is scaled according to the statistics of the local group of pixels. The main difference is that histogram equalisation enables precise control of the output pixel value range, whereas Gaussian normalisation does not. This problem is solved by means of a further transformation applied to image \(C\),

$$ C^{\prime} = \arctan(kC). \tag{3} $$
Since \(C\) is distributed across both negative and positive values, the arctan function in Equation (3) serves a purpose similar to a gamma transformation: pixel values near zero are amplified and extreme values far from zero are attenuated. This transformation gives control over the output pixel value range and prevents saturation of output pixel values. An example of \(C^{\prime}\) is shown in the right image of Figure 3. The parameter \(k\) controls the severity of the transformation. Since the pixel values of image \(C\) have a standard deviation of ∼ 1, a default value of \(k = 0.7\) gives good results for all the data shown in Section 4.
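The behaviour of Equation (3) is easy to verify numerically; the short check below is illustrative only:

```python
import numpy as np

k = 0.7
C = np.array([-5.0, -1.0, -0.1, 0.0, 0.1, 1.0, 5.0])  # sample locally normalised values
print(np.round(np.arctan(k * C), 3))
# Values near zero are scaled by roughly k (arctan(x) ≈ x for small x), whereas
# extreme values are compressed towards ±π/2, so the output range stays bounded.
```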
A set of \(n\) locally normalised and arctan-transformed images, \(C^{\prime}_{i}\), is created for \(n\) values of \(w_{i}\). Values used throughout this work are \(w = 1.25, 2.5, 5, 10, 20, 40\) pixels. A global gamma-transformed image, \(C^{\prime}_{g}\), is also created by

$$ C^{\prime}_{g} = \left( \frac{B - a_{0}}{a_{1} - a_{0}} \right)^{1/\gamma}. \tag{4} $$
\(\gamma\) can be set at values of 2.5 to 4. Throughout this work we use \(\gamma = 3.2\), which gives good results for all instruments and bandpasses we have processed. \(a_{0}\) and \(a_{1}\) set the minimum and maximum input values, respectively. For processing single images, they can be set to the minimum and maximum image values. For batch processing of a large number of observations, they can be set as the minimum and maximum values of the whole data set. For AIA/SDO observations, these values can easily be found at the start of a batch process by reading the appropriate values stored in the data headers (that is, without having to read in the full images).
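A sketch of Equation (4) follows. The DATAMIN/DATAMAX header keywords mentioned in the comment are an assumption about where suitable minimum and maximum values might be stored; they may differ between instruments and processing levels.

```python
import numpy as np

def global_gamma(B, a0, a1, gamma=3.2):
    """Global gamma-transformed image C'_g of Equation (4)."""
    return np.clip((B - a0) / (a1 - a0), 0.0, 1.0) ** (1.0 / gamma)

# For a single image, a0 and a1 can simply be B.min() and B.max().
# For batch processing they could instead be read from header values, e.g.
#   a0, a1 = header.get("DATAMIN"), header.get("DATAMAX")   # assumed keywords
# so that the full images need not be read in advance.
```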
The final processed image, \(I\), is calculated as a weighted average of the \(C^{\prime}_{i}\) using weights \(g_{i}\), plus the global image \(C^{\prime}_{g}\) weighted by a global weight \(h\),

$$ I = h\,C^{\prime}_{g} + \frac{1}{n} \sum_{i=1}^{n} g_{i}\, C^{\prime}_{i}. \tag{5} $$
The \(g_{i}\) can be calculated from a study of images consisting only of normally distributed random noise. For narrow Gaussian kernels, the mean of the local standard deviations across the whole image, \(\langle\sigma_{w}\rangle\), is an underestimate of the global standard deviation, and there is a larger variation in the local standard deviation. This is intuitive – as the kernel widens, the ratio of the mean local standard deviation to the global standard deviation approaches unity, and there is less variation in the local standard deviation. This is shown in Figure 4. Therefore the weights \(g_{i}\) are set at lower values for small \(w\), and approach values near one for \(w \gtrsim 3\), as listed in Figure 4. As described in Equation (5), the globally gamma-transformed image \(C^{\prime}_{g}\) is included to give contextual information on the largest-scale structure. We used \(h = 0.7\) in this work. In practice, the weights \(g_{i}\) and \(h\) may be adjusted according to the desired output, and also according to the type of input image (e.g. wavelength or channel). For most purposes, the \(g_{i}\) can be set equal for all scales so that a straightforward mean of the locally normalised \(C^{\prime}_{i}\) is taken.
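The noise experiment described above can be reproduced with a few lines (our sketch; it illustrates the trend shown in Figure 4 but does not reproduce the exact weights adopted there):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(1024, 1024))    # unit-variance Gaussian noise image

for w in (1.25, 2.5, 5, 10, 20, 40):
    mean = gaussian_filter(noise, sigma=w)
    sigma_w = np.sqrt(gaussian_filter((noise - mean) ** 2, sigma=w))
    ratio = sigma_w.mean() / noise.std()           # <sigma_w> / global standard deviation
    print(f"w = {w:5.2f}:  <sigma_w>/sigma = {ratio:.3f},  spread = {sigma_w.std():.3f}")
# The ratio approaches unity and the spread shrinks as w increases,
# which motivates down-weighting the smallest scales through the g_i.
```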
3.1 Pseudocode for MGN
1. Replace spurious negative pixels with zero or a local mean/median.
2. Create a Gaussian kernel of width \(w_{i}\). Kernel elements should sum to unity.
3. Convolve the image with the kernel to create the local mean image \(B \otimes k_{w}\).
4. Calculate the difference between the image and the local mean image, square the difference, and convolve with the kernel. Take the square root of the resulting image to give the ‘local standard deviation’ image \(\sigma_{w}\) (Equation (2)).
5. Calculate the normalised image \(C_{i}\) by subtracting the local mean image and dividing by the local standard deviation image (Equation (1)). Store the result.
6. Apply the arctan transformation to \(C_{i}\) to give \(C^{\prime}_{i}\).
7. Repeat steps 2 – 6 for the different kernel widths \(w_{i}\).
8. Take the mean, or weighted mean if preferred, of the \(C^{\prime}_{i}\) to give a weighted mean locally normalised image.
9. Calculate a global gamma-transformed image \(C^{\prime}_{g}\) by applying Equation (4).
10. Sum the weighted mean locally normalised image and the global gamma-transformed image \(C^{\prime}_{g}\), with appropriate weight \(h\) (Equation (5)).
Note that considerable efficiency is gained by convolving first along the x-direction with a one-dimensional Gaussian kernel, then convolving the resulting image along the y-direction with the transposed kernel. This is far more efficient than convolving directly with a two-dimensional kernel. Compared with the NAFE (Druckmüller 2013) or wavelet-based routines, the MGN method is simple and, since Gaussian smoothing is a standard routine in most data-analysis languages, can readily be programmed in a few lines of code. Without great rigour, we tested the efficiency of the processing using the Interactive Data Language (IDL) on a MacBook Pro laptop with 8 GB of RAM and a 2.6 GHz Intel Core i7 processor. The computing time increases approximately linearly with image size, from ∼ 1 s for a 500×500 image to 10 s for a 2K×2K image; a full 4096×4096 AIA/SDO image takes 40 s. This is extremely fast compared with other methods such as the NAFE (at least an order of magnitude faster) and does not require a high-performance computer.
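For completeness, the whole procedure can be sketched compactly in Python (SciPy's gaussian_filter performs the separable one-dimensional convolutions internally). This is an illustrative re-implementation following the pseudocode and the equations as written above, not the authors' IDL code, and the handling of weights and edge cases is simplified.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mgn(B, widths=(1.25, 2.5, 5, 10, 20, 40), k=0.7, gamma=3.2,
        h=0.7, g=None, a0=None, a1=None):
    """Multi-scale Gaussian normalisation (illustrative sketch)."""
    B = np.where(B > 0, B, 0.0).astype(float)            # step 1: remove spurious negatives
    a0 = B.min() if a0 is None else a0
    a1 = B.max() if a1 is None else a1
    g = np.ones(len(widths)) if g is None else np.asarray(g, dtype=float)

    acc = np.zeros_like(B)
    for gi, w in zip(g, widths):                         # steps 2 - 7: loop over kernel widths
        local_mean = gaussian_filter(B, sigma=w)         # B ⊗ k_w
        sigma_w = np.sqrt(gaussian_filter((B - local_mean) ** 2, sigma=w))   # Equation (2)
        C = (B - local_mean) / np.maximum(sigma_w, 1e-15)                    # Equation (1)
        acc += gi * np.arctan(k * C)                     # Equation (3), weighted by g_i
    mean_local = acc / len(widths)                       # step 8: (weighted) mean over scales

    C_g = np.clip((B - a0) / (a1 - a0), 0.0, 1.0) ** (1.0 / gamma)           # Equation (4)
    return h * C_g + mean_local                          # Equation (5), as reconstructed above

# Example use on an exposure-normalised AIA image held in a NumPy array:
# I = mgn(aia_image)
```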
4 Results
Figure 5 shows the result of applying MGN to the example AIA image of the previous sections. Structure is enhanced down to fine spatial scales, whilst large-scale context is preserved (that is, bright regions remain brighter than dark regions). Off-limb structure, where present, is enhanced without being swamped by noise. Structure is revealed in the extremely bright region at the base of an active region (the cyan-bordered region in Figure 2), which originally appeared saturated, and can also be seen in the dark regions immediately neighbouring that bright region. It is also very useful to be able to trace structure from its origins on the disk to off-limb heights. In this example, there are loops and fan-like bright features that extend from the disk and past the limb. Movies created using this processing reveal dynamic features at very small scales, such as flows within filament channels, flows along large coronal loops, small-scale activity within active regions, and movement in extended off-limb structures in response to dynamic events.
An example AIA movie is available online ( aia_movie.gif – an animated gif that can be viewed with a web browser), which shows a large portion of the western hemisphere and off-limb region observed in the 171 Å channel during 18 January 2013. Activity and small-scale flows are clearly seen, alongside larger-scale eruptions, and the movement/expansion of loops off the limb out to the edge of the field of view.
Another online movie ( comparison_movie.gif ) compares the appearance of a sequence of AIA observations processed by the MGN (left panel) and by a simple gamma transformation (right panel). This sequence shows a flaring active region and the eruption of a CME observed in the 171 Å channel during 08 March 2011 in the South–West. The disk and off-limb structure are seen far more clearly in the MGN images. Individual loops are resolved, even in the complicated off-limb region to the North of the active region. In the original data, the sequence is dominated by the bright core of the active region and, when the flare peaks, by the small saturated regions centred on the flare. The MGN images show this activity clearly, and also succeed in maintaining a clear and stable view of the surrounding structure throughout the movie, including the ‘wobble’ of the active-region loops in response to flaring activity. This enables a more comprehensive analysis of the event, showing connections between activity and the movement of structure that would otherwise remain hidden.
Figure 6 shows a High Resolution Coronal Imager (Hi-C) image processed with MGN. A movie showing the northernmost portion of this image is available online ( hic_movie.gif ). Various features such as magnetic braiding (Cirtain et al. 2013) are clearly seen, as are plasma streams along the dark filament channels and small-scale movement within the magnetic ‘moss’. Such features cannot be seen without appropriate processing. Figure 7 shows the application of MGN to an observation by the Sun Watcher using Active Pixel System detector and Image Processing (SWAP) instrument onboard the PRoject for OnBoard Autonomy 2 (PROBA2) satellite (Berghmans et al. 2006; Seaton et al. 2013). This instrument has a coarser resolution than AIA, but an extended field of view. Figure 7 reveals an erupting filament out to the extremity of the field of view, and other quiescent structures to ∼ 1.5 R⊙. Again, even low-signal structures are enhanced without too much amplification of noise.
The MGN processing is not limited to EUV observations. Figure 8 shows its application to a white-light coronagraph image of the extended inner corona by the Large-Angle and Spectrometric Coronagraph (LASCO) onboard SOHO. The left image shows an observation of 14 January 2011, with a long-term background subtracted to remove instrumental stray-light and F-corona. A point filter has also been applied to reduce spurious bright pixels. The right image shows an MGN-processed image, with parameters k=0.8, h=0.9 and γ=1. Smaller-scale features are enhanced, in particular faint plumes over the poles that are difficult to see in the original image. In this respect, the MGN is an improvement over the NRGF (Morgan, Habbal, and Woo 2006), although the NRGF provides structural context closer to the true K-corona.
It is difficult to make a quantitative comparison of images created using different processing. Different processes may serve different analysis purposes – the important function of the process described here is to reveal information hidden in the original images. A non-rigorous comparison ‘by eye’ of the same images processed using the MGN and the NAFE (Druckmüller 2013) suggests that the results are similar, with the NAFE giving slightly better clarity overall and improved noise suppression – most obvious at large heights off the limb. For the MGN, noise reduction arises naturally from taking the (weighted) mean across many spatial scales. This noise suppression is not as effective as that of the NAFE, and the MGN method lacks control over the degree of noise suppression. The NAFE method is also better suited to revealing contrast in very bright regions.
5 Summary
The MGN method normalises an image using the local mean and standard deviation calculated from a Gaussian-weighted sample of local pixels. This normalised image is transformed by the arctan function (similar to a gamma transformation). The process is applied over several spatial scales, and the final image is a weighted combination of the normalised components. The results compare well with those of multi-resolution wavelet enhancement or the NAFE procedure, but the MGN is far more computationally efficient. We hope that the MGN will become an established tool for researchers, offering a good compromise between computational time and clarity of the final images. The method is simple to implement, and the lead author is happy to provide the IDL code by email request.
References
Berghmans, D., Hochedez, J.F., Defise, J.M., Lecat, J.H., Nicula, B., Slemzin, V., Lawrence, G., Katsyiannis, A.C., van der Linden, R., Zhukov, A., Clette, F., Rochus, P., Mazy, E., Thibert, T., Nicolosi, P., Pelizzo, M.-G., Schühle, U.: 2006, SWAP onboard PROBA 2, a new EUV imager for solar monitoring. Adv. Space Res. 38, 1807 – 1811. 10.1016/j.asr.2005.03.070 .
Cirtain, J.W., Golub, L., Winebarger, A.R., de Pontieu, B., Kobayashi, K., Moore, R.L., Walsh, R.W., Korreck, K.E., Weber, M., McCauley, P., Title, A., Kuzin, S., Deforest, C.E.: 2013, Energy release in the solar corona from spatially resolved magnetic braids. Nature 493, 501 – 503. 10.1038/nature11772 .
Delaboudiniere, J.-P., Artzner, G.E., Brunaud, J., Gabriel, A.H., Hochedez, J.F., Millier, F., Song, X.Y., Au, B., Dere, K.P., Howard, R.A., Kreplin, R., Michels, D.J., Moses, J.D., Defise, J.M., Jamar, C., Rochus, P., Chauvineau, J.P., Marioge, J.P., Catura, R.C., Lemen, J.R., Shing, L., Stern, R.A., Gurman, J.B., Neupert, W.M., Maucherat, A., Clette, F., Cugnon, P., van Dessel, E.L.: 1995, Extreme-ultraviolet imaging telescope for the SOHO mission. Solar Phys. 162, 291 – 312.
Druckmüller, M.: 2013, A noise adaptive fuzzy equalization method for processing solar extreme ultraviolet images. Astrophys. J. Suppl. 207, 25. 10.1088/0067-0049/207/2/25 .
Druckmüllerová, H., Morgan, H., Habbal, S.R.: 2011, Enhancing coronal structures with the Fourier normalizing-radial-graded filter. Astrophys. J. 737, 88. 10.1088/0004-637X/737/2/88 .
Howard, R.A., Moses, J.D., Vourlidas, A., Newmark, J.S., Socker, D.G., Plunkett, S.P., et al.: 2008, Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI). Space Sci. Rev. 136, 67 – 115. 10.1007/s11214-008-9341-4 .
Kaiser, M.L., Kucera, T.A., Davila, J.M., St. Cyr, O.C., Guhathakurta, M., Christian, E.: 2008, The STEREO mission: An introduction. Space Sci. Rev. 136, 5 – 16. 10.1007/s11214-007-9277-0 .
Lemen, J.R., Title, A.M., Akin, D.J., Boerner, P.F., Chou, C., Drake, J.F., Duncan, D.W., Edwards, C.G., Friedlaender, F.M., Heyman, G.F., Hurlburt, N.E., Katz, N.L., Kushner, G.D., Levay, M., Lindgren, R.W., Mathur, D.P., McFeaters, E.L., Mitchell, S., Rehse, R.A., Schrijver, C.J., Springer, L.A., Stern, R.A., Tarbell, T.D., Wuelser, J.-P., Wolfson, C.J., Yanari, C., Bookbinder, J.A., Cheimets, P.N., Caldwell, D., Deluca, E.E., Gates, R., Golub, L., Park, S., Podgorski, W.A., Bush, R.I., Scherrer, P.H., Gummin, M.A., Smith, P., Auker, G., Jerram, P., Pool, P., Soufli, R., Windt, D.L., Beardsley, S., Clapp, M., Lang, J., Waltham, N.: 2012, The Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory (SDO). Solar Phys. 275, 17 – 40. 10.1007/s11207-011-9776-8 .
Martens, P.C.H., Attrill, G.D.R., Davey, A.R., Engell, A., Farid, S., Grigis, P.C., Kasper, J., Korreck, K., Saar, S.H., Savcheva, A., Su, Y., Testa, P., Wills-Davey, M., Bernasconi, P.N., Raouafi, N.-E., Delouille, V.A., Hochedez, J.F., Cirtain, J.W., Deforest, C.E., Angryk, R.A., de Moortel, I., Wiegelmann, T., Georgoulis, M.K., McAteer, R.T.J., Timmons, R.P.: 2012, Computer vision for the Solar Dynamics Observatory (SDO). Solar Phys. 275, 79 – 113. 10.1007/s11207-010-9697-y .
Morgan, H., Habbal, S.R., Woo, R.: 2006, The depiction of coronal structure in white-light images. Solar Phys. 236, 263 – 272. 10.1007/s11207-006-0113-6 .
Pesnell, W.D., Thompson, B.J., Chamberlin, P.C.: 2012, The Solar Dynamics Observatory (SDO). Solar Phys. 275, 3 – 15. 10.1007/s11207-011-9841-3 .
Seaton, D.B., Berghmans, D., Nicula, B., Halain, J.-P., De Groof, A., Thibert, T., Bloomfield, D.S., Raftery, C.L., Gallagher, P.T., Auchère, F., Defise, J.-M., D’Huys, E., Lecat, J.-H., Mazy, E., Rochus, P., Rossi, L., Schühle, U., Slemzin, V., Yalim, M.S., Zender, J.: 2013, The SWAP EUV imaging telescope part I: Instrument overview and pre-flight testing. Solar Phys. 286, 43 – 65. 10.1007/s11207-012-0114-6 .
Stenborg, G., Cobelli, P.J.: 2003, A wavelet packets equalization technique to reveal the multiple spatial-scale nature of coronal structures. Astron. Astrophys. 398, 1185 – 1193. 10.1051/0004-6361:20021687 .
Stenborg, G., Vourlidas, A., Howard, R.A.: 2008, A fresh view of the extreme-ultraviolet corona from the application of a new image-processing technique. Astrophys. J. 674, 1201 – 1206. 10.1086/525556 .
Acknowledgements
We are grateful for comments by the anonymous referee that improved this work. Huw Morgan is grateful for funding from the Coleg Cymraeg Cenedlaethol to Prifysgol Aberystwyth and support of SHINE grant 0962716 and NASA grant NNX08AJ07G to the Institute for Astronomy, University of Hawaii. The work of Miloslav Druckmüller was supported by Grant Agency of Brno University of Technology, project FSI-S-11-3. We acknowledge the High resolution Coronal Imager (Hi-C) instrument team for making the flight data publicly available. MSFC/NASA led the mission and partners include the Smithsonian Astrophysical Observatory in Cambridge, Mass.; Lockheed Martin’s Solar Astrophysical Laboratory in Palo Alto, Calif.; the University of Central Lancashire in Lancashire, England; and the Lebedev Physical Institute of the Russian Academy of Sciences in Moscow.