
Automated ground-based cloud recognition

  • Theoretical Advances
Pattern Analysis and Applications

Abstract

Recognition of naturally occurring objects is a challenging task. The recognition of clouds is especially difficult because their texture varies widely under different atmospheric conditions. A practical system that can detect and recognise clouds in natural images would benefit several applications, notably air traffic control. In this paper, we test well-known texture feature extraction approaches for automatically training a classifier system to recognise cumulus, towering cumulus and cumulonimbus clouds, sky, and other clouds. For cloud recognition, we use five feature extraction methods: autocorrelation, co-occurrence matrices, edge frequency, Laws' features and primitive length. We use k-nearest-neighbour and neural network classifiers to identify cloud types in test images. This exhaustive testing gives us a better understanding of the strengths and limitations of the different feature extraction methods and classification techniques on the given problem. In particular, we find that no single feature extraction method is best suited to recognising all classes; each has its own merits. We discuss these merits individually and suggest further improvements in this difficult area.


Figures 1–6 are available in the published article.

References

  1. Aha DW, Blankert RL (1994) Feature selection for case-based classification of cloud types: an empirical evaluation. Proceedings of AAAI-94 workshop on case-based reasoning, AAAI Press, Menlo Park, CA

  2. Aha DW, Blankert RL (1997) Cloud classification using error-correcting output codes. Artif Intell Appl Nat Resour Agric Environ Sci 11(1):13–28

  3. Augusteijn MF (1995) Performance evaluation of texture measures for ground cover identification in satellite images by means of a neural-network classifier. IEEE Trans Geosci Remote Sens 33:616–625

  4. Baraldi A, Parmiggiani F (1995) An investigation of the textural characteristics associated with gray level co-occurrence matrix statistical parameters. IEEE Trans Geosci Remote Sens 33(2):293–304

  5. Bishop CM (1995) Neural networks for pattern recognition. Oxford University Press, Oxford

  6. Blankert RL (1994) Cloud classification of AVHRR imagery in maritime regions using a probabilistic neural network. J Appl Meteorol 33:909–918

  7. Conners RW, Harlow CA (1980) A theoretical comparison of texture algorithms. IEEE Trans Pattern Anal Mach Intell 2(3):204–222

  8. Conover JH (1962) Cloud interpretation from satellite altitudes. Technical Report AFCRL-62680, Air Force Cambridge Research Lab, Cambridge

  9. Ebert EE (1987) A pattern recognition technique for distinguishing surface and cloud types in the polar regions. J Clim Appl Meteorol 26:1412–1427

  10. Gordon DF, Tag PM (1995) Unsupervised classification procedures applied to satellite cloud data. Technical Report AIC95-005, Navy Center for Applied Research in Artificial Intelligence, Washington

  11. Gu ZQ, Duncan CN, Grant PM, Cowan CFN, Renshaw E, Mugglestone MA (1991) Textural and spectral features as an aid to cloud classification. Int J Remote Sens 12(5):953–968

  12. Haddon JF, Boyce JF (1993) Co-occurrence matrices for image analysis. IEE Electron Commun Eng J 5(2):71–83

  13. Haralick RM, Shanmugam K, Dinstein I (1973) Textural features for image classification. IEEE Trans Syst Man Cybern SMC-3:610–621

  14. Harris R, Barrett EC (1978) Toward an objective cloud analysis. J Appl Meteorol 17:1258–1266

  15. Kittler J, Pairman D (1985) Contextual pattern recognition applied to cloud detection and identification. IEEE Trans Geosci Remote Sens 23(6):855–863

  16. Kuo KS, Welch RM, Sengupta SK (1988) Structural and textural characteristics of cirrus clouds observed using high spatial resolution Landsat imagery. J Appl Meteorol 27:1242–1260

  17. Lamei N, et al (1994) Cloud type discrimination via multispectral textural analysis. Opt Eng 33:1303–1313

  18. Laws KI (1980) Textured image segmentation. PhD thesis, Electrical Engineering, University of Southern California

  19. Lee J, Weger RC, Sengupta SK, Welch RM (1990) A neural network approach to cloud classification. IEEE Trans Geosci Remote Sens 28(5):846–855

  20. Ohanian PP, Dubes RC (1992) Performance evaluation for four classes of texture features. Pattern Recognit 25(8):819–833

  21. Ojala T, Pietikainen M (1996) A comparative study of texture measures with classification based on feature distributions. Pattern Recognit 29(1):51–59

  22. Pankiewicz GS (1995) Pattern recognition techniques for the identification of cloud and cloud systems. Meteorol Appl 2:257–271

  23. Parikh JA (1977) A comparative study of cloud classification techniques. Remote Sens Environ 6:67–81

  24. Parikh JA (1978) Cloud classification from visible and infrared SMS-1 data. Remote Sens Environ 7:85–92

  25. Reed TR, du Buf JMH (1993) A review of recent texture segmentation and feature extraction techniques. Comput Vis Image Process Graph 57(3):359–372

  26. Singh S, Sharma M (2001) Texture analysis experiments with Meastex and Vistex benchmarks. Proceedings of international conference on advances in pattern recognition, Rio, Brazil, Lecture notes in computer science, no. 2013. Springer, Germany, pp 417–424

  27. Smith SM, Brady JM (1997) SUSAN—a new approach to low level image processing. Int J Comput Vis 23(1):45–78

  28. Sonka M, Hlavac V, Boyle R (1999) Image processing, analysis and machine vision. PWS Publishing, San Francisco

  29. SPRLIB: http://www.ph.tn.tudelft.nl/~sprlib/

  30. Tian B, Shaikh MA, Azimi-Sadjadi MR, Haar THV, Reinke DL (1999) A study of cloud classification with neural networks using spectral and textural features. IEEE Trans Neural Netw 10(1):138–151

  31. Tuceryan M, Jain AK (1993) Texture analysis. In: Chen CH, Pau LF, Wang PSP (eds) Handbook of pattern recognition and computer vision, chapter 2. World Scientific, Singapore, pp 235–276

  32. van Gool L, Dewaele P, Oosterlinck A (1985) Texture analysis. Comput Vis Graph Image Process 29:336–357

  33. Visa A, Valkealahti K, Simula O (1991) Cloud detection based on texture segmentation by neural network methods. Proceedings of IEEE international joint conference on neural networks, vol 2. Singapore, November 1991, pp 1001–1006

  34. Weiss SM, Kulikowski CA (1991) Computer systems that learn. Morgan Kaufmann, ISBN 1-55860-065-5

  35. Welch RM, Kuo KS, Sengupta SK, Chen DW (1988) Cloud field classification based upon high spatial resolution textural features: gray level co-occurrence matrix approach. J Geophys Res 93:663–681

  36. Weszka JS, Dyer CR, Rosenfeld A (1976) A comparative study of texture measures for terrain classification. IEEE Trans Syst Man Cybern 6:269–285

  37. Wu R, Weinman JA, Chin RT (1985) Determination of rainfall rates from GOES satellite images by a pattern recognition technique. J Atmos Oceanic Technol 2:314–330

  38. Yhann SR, Simpson JJ (1995) Application of neural networks to AVHRR cloud segmentation. IEEE Trans Geosci Remote Sens 33(3):590–603


Acknowledgements

We would like to thank Mark Watson, Katherine Blair, and Lluis Vinagre from National Air Traffic Control and the Met Office, UK.

Author information


Corresponding author

Correspondence to Maneesha Singh.

Appendices

Appendix: Texture features used for cloud recognition

1.1 Autocorrelation

Texture coarseness can be measured with an autocorrelation function, which evaluates the linear spatial relationships between texture primitives. If the primitives are large, the function decreases slowly with increasing distance, whereas it decreases rapidly if the texture consists of small primitives. If the primitives are periodic, the autocorrelation rises and falls periodically with distance. The set of autocorrelation coefficients C shown below is used as texture features:

$$ C_{ff} (p,q) = \frac{{MN}} {{(M - p)(N - q)}}\frac{{\sum\nolimits_{i = 1}^{M - p} {\sum\nolimits_{j = 1}^{N - q} {f(i,j)f(i + p,j + q)} } }} {{\sum\nolimits_{i = 1}^M {\sum\nolimits_{j = 1}^N {f^2 (i,j)} } }}, $$

where p and q are the positional shifts in the i and j directions, and M and N are the image dimensions.
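As a concrete illustration, the coefficient above maps directly onto NumPy array slicing. This is a minimal sketch: the function name and the particular set of shifts (p, q) are our illustrative choices, not from the paper.

```python
import numpy as np

def autocorrelation_features(f, shifts=((1, 0), (0, 1), (2, 0), (0, 2))):
    """Autocorrelation coefficients C_ff(p, q) of a grey-level image f."""
    f = np.asarray(f, dtype=float)
    M, N = f.shape
    denom = np.sum(f * f)                            # sum of f^2 over the whole image
    feats = {}
    for p, q in shifts:
        num = np.sum(f[:M - p, :N - q] * f[p:, q:])  # sum of f(i,j) * f(i+p, j+q)
        feats[(p, q)] = (M * N) / ((M - p) * (N - q)) * num / denom
    return feats

# A slowly varying ramp keeps high autocorrelation at small shifts.
ramp = np.tile(np.arange(16.0), (16, 1))
print(autocorrelation_features(ramp))
```

For a perfectly uniform image every coefficient equals 1, which is a quick sanity check on an implementation.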

1.2 Co-occurrence matrices

A co-occurrence matrix records the joint probability of occurrence of grey levels i and j for two pixels in a defined spatial relationship, specified by a distance d and an angle θ. If the texture is coarse and d is small compared with the size of the texture elements, pairs of points at distance d will usually have similar grey levels. Conversely, for a fine texture in which d is comparable to the element size, the grey levels of points separated by d will often differ, so the values in the co-occurrence matrix are spread relatively uniformly. A good way to analyse texture coarseness is therefore to measure, for various values of d, the scatter of the co-occurrence matrix around its main diagonal. Similarly, if the texture is directional, that is, coarser in one direction than another, the spread about the main diagonal varies with the angle θ, so directionality can be analysed by comparing spread measures of co-occurrence matrices constructed at various distances d and directions θ.

A variety of features can be extracted from co-occurrence matrices. From each matrix, we extract the 14 statistical measures proposed by Haralick et al. [13]: angular second moment, contrast, correlation, variance, inverse difference moment, sum average, sum variance, sum entropy, entropy, difference variance, difference entropy, two information measures of correlation, and the maximal correlation coefficient.
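A minimal sketch of building one co-occurrence matrix and three of the 14 measures (angular second moment, contrast and entropy). The quantisation to a small number of grey levels, the one-directional (non-symmetric) counting and all names are illustrative assumptions; libraries such as scikit-image provide equivalent, optimised routines.

```python
import numpy as np

def cooccurrence_matrix(img, d=1, angle=0, levels=8):
    """Normalised grey-level co-occurrence matrix for distance d and angle in degrees."""
    img = np.asarray(img)
    di, dj = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}[angle]
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            i2, j2 = i + di, j + dj
            if 0 <= i2 < rows and 0 <= j2 < cols:
                P[img[i, j], img[i2, j2]] += 1   # count the grey-level pair
    return P / P.sum()

def haralick_subset(P):
    """Three of the 14 Haralick measures: angular second moment, contrast, entropy."""
    i, j = np.indices(P.shape)
    asm = np.sum(P ** 2)                 # high for homogeneous textures
    contrast = np.sum(P * (i - j) ** 2)  # scatter away from the main diagonal
    nz = P[P > 0]
    entropy = -np.sum(nz * np.log2(nz))  # randomness of grey-level pairs
    return asm, contrast, entropy
```

Note that the contrast term is exactly the "spread about the main diagonal" discussed above: it weights each entry by its squared distance from the diagonal.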

1.3 Edge frequency

A number of edge detectors can be used to yield an edge image from an original image. We can compute an edge-dependent texture description function E as follows:

$$ E(d) = |f(i,j) - f(i + d,j)| + |f(i,j) - f(i - d,j)| + |f(i,j) - f(i,j + d)| + |f(i,j) - f(i,j - d)|. $$

This function is inversely related to the autocorrelation function. Texture features can be evaluated by choosing specified distances d (pixel distance).
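The four terms of E(d) can be evaluated over the whole image interior with array slicing. Averaging E over the interior pixels is our choice of summary statistic for illustration; the function name is ours as well.

```python
import numpy as np

def edge_frequency(f, d):
    """Mean of E(d) over interior pixels whose four d-neighbours all exist."""
    f = np.asarray(f, dtype=float)
    g = f[d:-d, d:-d]                       # interior pixels f(i, j)
    e = (np.abs(g - f[2 * d:, d:-d])        # |f(i,j) - f(i+d, j)|
         + np.abs(g - f[:-2 * d, d:-d])     # |f(i,j) - f(i-d, j)|
         + np.abs(g - f[d:-d, 2 * d:])      # |f(i,j) - f(i,j+d)|
         + np.abs(g - f[d:-d, :-2 * d]))    # |f(i,j) - f(i,j-d)|
    return e.mean()

# Texture features: E(d) evaluated at a few pixel distances d.
noise = np.random.default_rng(0).random((32, 32))
features = [edge_frequency(noise, d) for d in (1, 2, 3)]
```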

1.4 Laws’ method

Laws observed that certain gradient operators such as Laplacian and Sobel operators accentuated the underlying microstructure of texture within an image. This was the basis for a feature extraction scheme based on a series of pixel impulse response arrays obtained from combinations of 1-D vectors shown below. Each 1-D array is associated with an underlying microstructure and labelled using an acronym accordingly. The arrays are convolved with other arrays in a combinatorial manner to generate a total of 25 masks, typically labelled as L5L5 for the mask resulting from the convolution of the two L5 arrays.

Five 1-D arrays identified by Laws

$$ \begin{aligned} {\text{Level L}}5 & = [\begin{matrix} 1 & 4 & 6 & 4 & 1 \end{matrix}] \\ {\text{Edge E}}5 & = [\begin{matrix} -1 & -2 & 0 & 2 & 1 \end{matrix}] \\ {\text{Spot S}}5 & = [\begin{matrix} -1 & 0 & 2 & 0 & -1 \end{matrix}] \\ {\text{Wave W}}5 & = [\begin{matrix} -1 & 2 & 0 & -2 & 1 \end{matrix}] \\ {\text{Ripple R}}5 & = [\begin{matrix} 1 & -4 & 6 & -4 & 1 \end{matrix}]. \end{aligned} $$

These masks are convolved with a texture field to accentuate its microstructure, giving an image from which the energy of the microstructure is measured together with other statistics. The energy measure S(j,k) for a neighbourhood centred at pixel (j,k) is the neighbourhood standard deviation about the mean image amplitude:

$$ S(j,k) = \left[ {\frac{1} {{W^2 }}\sum\limits_{m = - w}^w {\sum\limits_{n = - w}^w {[F(j + m,k + n) - M(j + m,k + n)]^2 } } } \right]^{1/2} , $$

where the neighbourhood is W × W pixels with W = 2w + 1, and the mean image amplitude M(j,k) is defined as:

$$ M(j,k) = \frac{1} {{W^2 }}\sum\limits_{m = - w}^w {\sum\limits_{n = - w}^w {F(j + m,k + n)} } . $$
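Generating the 25 masks is mechanical: convolving a 5×1 column vector with a 1×5 row vector is exactly their outer product, so the full mask set can be sketched as below. The dictionary keys follow the L5L5 naming convention mentioned above; the helper name is ours.

```python
import numpy as np

# The five 1-D arrays identified by Laws.
VECTORS = {
    "L5": np.array([ 1,  4, 6,  4,  1]),   # Level
    "E5": np.array([-1, -2, 0,  2,  1]),   # Edge
    "S5": np.array([-1,  0, 2,  0, -1]),   # Spot
    "W5": np.array([-1,  2, 0, -2,  1]),   # Wave
    "R5": np.array([ 1, -4, 6, -4,  1]),   # Ripple
}

def laws_masks():
    """All 25 5x5 masks: outer product of each ordered pair of 1-D arrays."""
    return {a + b: np.outer(u, v)
            for a, u in VECTORS.items() for b, v in VECTORS.items()}
```

A useful property for checking an implementation: every mask containing E5, S5, W5 or R5 sums to zero (those vectors are zero-mean), while L5L5 sums to 256.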

1.5 Run length encoding

A large number of neighbouring pixels of the same grey level represents a coarse texture, and a small number represents a fine texture; the lengths of texture primitives in different directions can therefore serve as a texture description. A primitive is a maximal contiguous set of constant-grey-level pixels located along a line, and can be described by its grey level, length and direction. The texture description features are then based on computing the probabilities of the length and the grey level of primitives in the texture.

Let B(a,r) be the number of primitives of all directions having length r and grey level a, let A be the area of the region in question, let L be the number of grey levels within that region, and let N_r be the maximum primitive length within the image. The texture description features can then be determined from the total number of runs K:

$$ K = \sum\limits_{a = 1}^L {\sum\limits_{r = 1}^{N_r } {B(a,r)} } . $$

The features are defined as:

  1. (a)

    Short primitives emphasis

    $$ \frac{1} {K}\sum\limits_{a = 1}^L {\sum\limits_{r = 1}^{N_r } {\frac{{B(a,r)}} {{r^2 }}} } . $$
  2. (b)

    Long primitives emphasis

    $$ \frac{1} {K}\sum\limits_{a = 1}^L {\sum\limits_{r = 1}^{N_r } {B(a,r)r^2 } } . $$
  3. (c)

    Grey level uniformity

$$ \frac{1} {K}\sum\limits_{a = 1}^L {\left[ {\sum\limits_{r = 1}^{N_r } {B(a,r)} } \right]^2 } . $$
  4. (d)

    Primitive length uniformity

$$ \frac{1} {K}\sum\limits_{r = 1}^{N_r } {\left[ {\sum\limits_{a = 1}^L {B(a,r)} } \right]^2 } . $$
  5. (e)

    Primitive percentage

$$ \frac{K} {{\sum\nolimits_{a = 1}^L {\sum\nolimits_{r = 1}^{N_r } {rB(a,r)} } }} = \frac{K} {A}. $$
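The five features follow the standard run-length (primitive-length) definitions. The sketch below considers horizontal runs only, whereas the appendix counts primitives in all directions; the run matrix B is indexed by grey level a and run length r, and all function names are ours.

```python
import numpy as np

def run_length_matrix(img, levels):
    """B(a, r): number of horizontal runs of grey level a having length r."""
    img = np.asarray(img)
    B = np.zeros((levels, img.shape[1] + 1))
    for row in img:
        a, r = row[0], 1
        for v in row[1:]:
            if v == a:
                r += 1                     # run continues
            else:
                B[a, r] += 1               # run ends; record it
                a, r = v, 1
        B[a, r] += 1                       # record the final run of the row
    return B

def run_length_features(B, area):
    """The five features (a)-(e) from run matrix B over a region of given area."""
    r = np.arange(B.shape[1])[None, :]
    K = B.sum()                            # total number of runs
    r_safe = np.maximum(r, 1)              # column r = 0 of B is always empty
    spe = np.sum(B / r_safe ** 2) / K      # (a) short primitives emphasis
    lpe = np.sum(B * r ** 2) / K           # (b) long primitives emphasis
    glu = np.sum(B.sum(axis=1) ** 2) / K   # (c) grey level uniformity
    plu = np.sum(B.sum(axis=0) ** 2) / K   # (d) primitive length uniformity
    pct = K / area                         # (e) primitive percentage
    return spe, lpe, glu, plu, pct
```

The identity in feature (e) is visible here: summing r·B(a,r) over all runs counts every pixel exactly once, so the denominator equals the region area A.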

Author biographies

Dr. Maneesha Singh is currently a Research Fellow at the Research School of Informatics, Loughborough University. She received her M.Phil. and Ph.D. degrees from the University of Exeter. Her past research investigated image processing optimisation for aviation security, analysing dual X-ray luggage images for explosive detection. Her current work focuses on developing novel machine-vision-based inspection systems for the railway and steel industries. She is a member of the IEEE and Membership Secretary of the BCS Specialist Group on Pattern Analysis and Robotics. She was the Organising Chair of the third International Conference on Advances in Pattern Recognition, 2005, and is the Organising Chair of the Summer Schools on Pattern Recognition held annually in the UK. She has published more than 25 papers in the areas of computer vision and machine learning.

Mr. Matthew Glennen received his B.Sc. and M.Sc. degrees from the University of Exeter, UK. He currently works at the Home Office Scientific Development Branch in the UK.


About this article

Cite this article

Singh, M., Glennen, M. Automated ground-based cloud recognition. Pattern Anal Applic 8, 258–271 (2005). https://doi.org/10.1007/s10044-005-0007-5
