Ellipticity and Circularity Measuring via Kullback–Leibler Divergence
Abstract
Using the Kullback–Leibler divergence, we provide a simple statistical measure which uses only the covariance matrix of a given set to verify whether the set is an ellipsoid. A similar measure is provided for the verification of circles and balls. The new measure is easily computable, intuitive, and can be applied to higher-dimensional data. Experiments have been performed to illustrate that the new measure behaves in a natural way.
Keywords
Kullback–Leibler divergence · Circularity measure · Ellipticity measure · Image processing
1 Introduction
Human image analysis still poses huge challenges for scientists. Automatic object recognition and interpretation of an image by a computer is crucial in building excellent image analysis software, especially for extracting higher-level information.
Real-life applications have driven the idea of describing object characteristics by a set of numbers, thus enabling a spectrum of numerical quantifications. Many shape descriptors have been created and used [1, 12]. Some of them have a generic purpose, such as the Fourier descriptors [3] or moment invariants [10, 15]. On the other hand, for the specific purpose of classification, several shape descriptors are useful for describing and differentiating a variety of objects: convexity [23], rectangularity [24], linearity [27], symmetry [29], etc. Note that, due to the diversity of shapes, descriptors have applications in various areas such as computer science, medicine, biology, and robotics.
Problem 1
If the actual area occupied by an object can be matched by the well-known area formula \(\pi r_1 r_2\), the object has a good chance of being an ellipse, or, if by \(\pi r^2\), a circle. Obviously, the questions of how to estimate the major and minor ellipse radii \(r_1\), \(r_2\) (or the radius r), and how to formalize "good chance", still need to be answered.
To develop a solution to the presented problem, we consider closely the existing ones, namely the methods which verify whether a set is an ellipsoid, and which differentiate between ellipses and circles. The work presented in this article aims to partially generalize the methods presented by Žunić et al. [31], Žunić and Žunić [30], and Rosin [24], which describe circularity and ellipticity measures. The reason to choose these methods is their performance superiority in the case of shape boundary defects compared to the other standard methods; moreover, the behavior of these measures (i.e., numerical shape characteristics) can be relatively easily understood and reasonably predicted. The aforementioned articles describe explicit formulas which use the first two Hu moment invariants to evaluate how much a planar shape differs from a circle or an ellipse. Detailed information on those measures (the circularity \(\mathcal {C}_H\) and the ellipticity measures \(\mathcal {E}_H\) and \(\mathcal {E}_I\)) is presented in Sect. 2 of this article.
The main result of this work, presented in Theorem 2, gives an estimation of (1) in the case of circles and ellipses. This allows us to derive conditions testing whether a given set is ellipse-like (\(\mathcal {E}_N\)) or circle-like (\(\mathcal {C}_N\)). Hence, they can be used as measures of ellipticity and circularity. Our measures were tested on several examples from [30, 31]. Experiments verify many advantages of our approach, e.g., behavior consistent with human intuition and invariance under similarity transformations. Moreover, our measures can be applied to higher-dimensional data (see Figs. 9, 10, 11).
This paper is organized as follows. In the next section the state of the art is introduced. In Sect. 3 we briefly describe the main result of this work together with a sketch of the proof. In Sect. 4 we set up notation and terminology for the Kullback–Leibler divergence and cross-entropy. In Sect. 5 we provide the formulas for circularity and ellipticity measurement. Comments and conclusions can be found in the last section.
2 State of the Art
In this section several of the most standard measures of circularity and ellipticity are mentioned. These measures range over (0, 1] and give a measurement equal to 1 if and only if the measured shape is a circle or an ellipse, respectively.
Let us consider an arbitrary set \(S\subset \mathbb {R}^2\).
2.1 Circularity Measure
Other examples of methods for measuring the circularity can be found in [7, 13, 14, 22].
2.2 Ellipticity Measure

\(\mathcal {I}_1(S)=m_{2,0}(S)+m_{0,2}(S)\),

\(\mathcal {I}_2(S)=(m_{2,0}(S)-m_{0,2}(S))^2+4(m_{1,1}(S))^2\),
3 Main Theorem
In this section we present the main result of this paper: a set S is an ellipsoid precisely when the uniform probability density on it attains the minimal Kullback–Leibler divergence from the corresponding normal density. The Kullback–Leibler divergence is a fundamental quantity of information theory which measures the proximity of two probability distributions (a brief summary and the proof are presented in a later part of this work). Using the Kullback–Leibler divergence we show that it is enough to know three moments of the object (in \(\mathbb {R}^2\)) to check whether the given set is an ellipse.
Theorem 1
Then$$\begin{aligned} \mathcal {E}_N(S):=\frac{\varGamma (N/2+1)}{((N+2)\pi )^{N/2}}\cdot \frac{\lambda _N(S)}{\sqrt{\det (\Sigma _S)}}\le 1, \end{aligned}$$(2)where the equality holds if S is an ellipse. If this is the case, then \(S=\mathcal {B}_{\Sigma _S}(\mathrm {m}_S,\sqrt{N+2})\).
Then$$\begin{aligned} \mathcal {C}_N(S):=\frac{\varGamma (N/2+1)}{((N+2)\pi /N)^{N/2}}\cdot \frac{\lambda _N(S)}{(\mathrm {tr}(\Sigma _S))^{N/2}}\le 1, \end{aligned}$$(3)where the equality holds if S is a circle. If this is the case, then \(S=\mathcal {B}(\mathrm {m}_S,\sqrt{\frac{N+2}{N}\mathrm {tr}(\Sigma _S)})\).

we first observe that we can restrict ourselves to the case when the mean of S is zero and the covariance equals the identity;

next we fit to the data the optimal uniform density on a ball \(\mathcal {B}(0,R)\), with R such that the volume of S equals the volume of \(\mathcal {B}(0,R)\);

finally, we show that if S contained elements outside of \(\mathcal {B}(0,R)\), then by "moving" those elements inside \(\mathcal {B}(0,R)\) we would decrease the value of the respective Kullback–Leibler divergence.
Under the above definition, parameter \(\mathcal {E}_2\) is invariant to affine transformations, while \(\mathcal {C}_2\) is invariant to isometric transformations (compare with Theorem 3).
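Since the measures of Theorem 1 depend only on the volume and the covariance matrix, they are straightforward to evaluate on a discretized shape. Below is a minimal Python sketch (not from the paper; the function names and the approximation of \(\lambda _N(S)\) by the point count times the cell volume are our own choices, and the small grid-cell covariance correction of footnote 2 is omitted for brevity):

```python
import numpy as np
from math import gamma, pi

def ellipticity(points, cell_volume=1.0):
    # E_N(S) = Gamma(N/2+1) / ((N+2)*pi)^(N/2) * lambda_N(S) / sqrt(det(Sigma_S))
    X = np.asarray(points, dtype=float)
    N = X.shape[1]
    volume = len(X) * cell_volume            # lambda_N(S) for a pixel/voxel set
    Sigma = np.cov(X, rowvar=False, bias=True)
    return gamma(N/2 + 1) / ((N + 2)*pi)**(N/2) * volume / np.sqrt(np.linalg.det(Sigma))

def circularity(points, cell_volume=1.0):
    # C_N(S) = Gamma(N/2+1) / ((N+2)*pi/N)^(N/2) * lambda_N(S) / tr(Sigma_S)^(N/2)
    X = np.asarray(points, dtype=float)
    N = X.shape[1]
    volume = len(X) * cell_volume
    Sigma = np.cov(X, rowvar=False, bias=True)
    return gamma(N/2 + 1) / ((N + 2)*pi/N)**(N/2) * volume / np.trace(Sigma)**(N/2)
```

For a fine pixelation of a disc both functions return values close to 1, while an elongated ellipse keeps \(\mathcal {E}_N\) near 1 but lowers \(\mathcal {C}_N\).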
Remark 1
If S is an ellipse, then one can easily verify that \(4\pi \sqrt{\det (\Sigma _S)}\) equals its area. Thus, we see that (2) is a realization of the idea given in Problem 1.
Analogously, if S is a circle, then its area equals \(2\pi \mathrm {tr}(\Sigma _S)\), and consequently (3) gives a formalization of an analogue of Problem 1 for circles.
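Remark 1 can be checked numerically. The snippet below (our own sanity check; the semi-axes and sample size are arbitrary illustrative choices) samples an ellipse uniformly by rejection and recovers its area \(\pi ab\) from \(4\pi \sqrt{\det (\Sigma _S)}\):

```python
import numpy as np

# For an ellipse with semi-axes a, b, the quantity 4*pi*sqrt(det(Sigma_S))
# should recover the area pi*a*b (Remark 1).
rng = np.random.default_rng(0)
a, b = 3.0, 1.5
p = rng.uniform(-1.0, 1.0, size=(200000, 2))
p = p[(p**2).sum(axis=1) <= 1.0] * np.array([a, b])   # uniform points in the ellipse
est_area = 4*np.pi*np.sqrt(np.linalg.det(np.cov(p, rowvar=False)))
```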
Directly from Theorem 1 (namely, Eqs. (2) and (3)), we can compare the new measures with the measures recalled in the state of the art of this article (Sect. 2).
Observation 1

\(\mathcal {C}_2(S)=\mathcal {C}_H(S)\);

\(\mathcal {E}_2(S)\le a \Rightarrow \mathcal {E}_I(S) \le a^2\) for \(a \in (0,1]\).
Proof
Consequently, the authors' approach for two-dimensional data leads to the same conclusions as the indices \(\mathcal {E}_I\) [24] and \(\mathcal {C}_H\) [31].
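The identity \(\mathcal {C}_2(S)=\mathcal {C}_H(S)\) can also be verified numerically. In the sketch below we assume the standard definition of \(\mathcal {C}_H\) from [31], i.e., area squared divided by \(2\pi (\mu _{2,0}+\mu _{0,2})\) with central geometric moments; the function names are ours:

```python
import numpy as np

def c2(points, cell_area=1.0):
    # C_2(S) = lambda_2(S) / (2*pi*tr(Sigma_S)), cf. Theorem 1 for N = 2
    pts = np.asarray(points, dtype=float)
    Sigma = np.cov(pts, rowvar=False, bias=True)
    return len(pts)*cell_area / (2*np.pi*np.trace(Sigma))

def c_hu(points, cell_area=1.0):
    # C_H(S) = area(S)^2 / (2*pi*(mu_20 + mu_02)) with central moments of the set
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    mu20 = np.sum((pts[:, 0] - c[0])**2) * cell_area
    mu02 = np.sum((pts[:, 1] - c[1])**2) * cell_area
    area = len(pts) * cell_area
    return area**2 / (2*np.pi*(mu20 + mu02))
```

Since \(\mu _{2,0}+\mu _{0,2}\) equals the area times \(\mathrm {tr}(\Sigma _S)\), the two functions agree up to floating-point error on any point set.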
Remark 2
The measure \(\mathcal {C}_{st}\) uses one of the most popular and standard approaches to circularity measurement, derived from the relation between the area of a shape and the length of its perimeter. As one can show, and as can also be observed in the above examples, this measure stabilizes on the octagon, where it achieves its highest value. This is caused by the fact that in the calculation of the boundary of a given discrete shape we can "move" only along lines which form an angle that is a multiple of \(\pi /4\) with the axes; see Fig. 3c for an illustration.
Since this measure has long been successfully applied to circle discovery in images, we conclude that, from the practical point of view, the octagon presents a sufficient numerical approximation of the circle in the most commonly encountered applications.
4 Kullback–Leibler Divergence and Cross-Entropy
4.1 Basic Definitions on Kullback–Leibler Divergence
We now remind the reader of the concept of differential entropy which is the entropy of a continuous random variable [5].
Definition 1
Differential entropy is also related to the shortest description length and is similar in many ways to the well-known entropy of a discrete random variable, since it extends the idea of Shannon entropy (a measure of the expected information content of a message) to continuous probability distributions. The value of differential entropy depends only on the probability density of the random variable [5]. In this paper we shall abbreviate differential entropy as entropy.
Let us now calculate the differential entropy of the simplest density: the uniform one.
Example 1
Remark 3
In the general case one could consider various densities, not just the uniform one. However, in practical applications, Gaussian densities are typically considered, as they are easy to work with. In many practical cases, methods developed under the assumption that the data have a normal distribution work quite well even when the density is not normal. Furthermore, the central limit theorem provides a theoretical basis for this wide applicability. This density approximates many natural phenomena so well that it has developed into a standard of reference for many probability problems. As an excellent example we refer the reader to [11], where Gaussian distributions were used for modeling contours and applied to shape retrieval.
Since in this paper we focus on circular and elliptical shapes, to detect them we could theoretically use any densities which have ellipses or circles as level sets. We have decided to use Gaussian ones, since for them we have accurate, explicit, and numerically efficient formulas for the estimation of their parameters.
The differential entropy of a Gaussian density is considered in the following example.
Example 2
We can now proceed to the Kullback–Leibler divergence, which is the "cost" associated with selecting a distribution q from a distribution family \(\mathbb {Q}\) to approximate the true distribution p [5].
Definition 2
\(D_{KL}\) is nonnegative, equals zero if the distributions match exactly, and can potentially be infinite. However, the Kullback–Leibler divergence is a nonsymmetric information-theoretical measure of the distance of density p from q, namely \(D_{KL}(p\Vert q)\not = D_{KL}(q\Vert p)\), so it is not strictly a distance metric. There are some natural modifications which deal with this problem, e.g., [2, 17].
By introducing the next definition, cross-entropy, we can simplify \(D_{KL}(p\Vert q)\) for arbitrary densities p and q.
Definition 3
It is worth specifying that cross-entropy is a variant of the entropy definition that allows us to compare two probability distributions over the same random variable. We treat the first argument as the "target" probability distribution and the second as the estimated one, for which we try to evaluate how well it "fits" the target.
4.2 Kullback–Leibler Divergence Between Uniform and Gaussian Densities
We proceed to the comparison of uniform and normal distributions via relative entropy.
We will now show the formula for the Kullback–Leibler divergence of uniform densities.
Observation 2
Proof
Clearly \( D_{KL}(\mathrm {u}_S \Vert \mathcal {G})=H^{\times }(\mathrm {u}_S\Vert \mathcal {G})-h(\mathrm {u}_S)= H^{\times }(\mathcal {G}[\mathrm {u}_S]\Vert \mathcal {G})-\ln (\lambda _N(S)) =H^{\times }(\mathcal {G}[\mathrm {u}_S]\Vert \mathcal {G}[\mathrm {u}_S])-\ln (\lambda _N(S))= h(\mathcal {G}[\mathrm {u}_S])-\ln (\lambda _N(S)) \) \( =\frac{N}{2}\ln (2\pi e)+\frac{1}{2}\ln (\det (\Sigma _S))-\ln (\lambda _N(S))\). \(\square \)
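As a quick sanity check (ours, not from the paper), the closed form of Observation 2 can be compared with direct numerical integration in the simplest one-dimensional case \(S=[-a,a]\), for which \(\Sigma _S=a^2/3\) and \(\lambda _1(S)=2a\):

```python
import numpy as np

# Check: D_KL(u_S || G[u_S]) = (N/2) ln(2*pi*e) + (1/2) ln det(Sigma_S) - ln lambda_N(S)
# for N = 1 and S = [-a, a].
a = 1.5
sigma2 = a**2 / 3.0                 # covariance of the uniform density on [-a, a]
closed_form = 0.5*np.log(2*np.pi*np.e) + 0.5*np.log(sigma2) - np.log(2*a)

x = np.linspace(-a, a, 200001)
u = np.full_like(x, 1.0/(2*a))                              # u_S
g = np.exp(-x**2/(2*sigma2)) / np.sqrt(2*np.pi*sigma2)      # fitted Gaussian G[u_S]
f = u * np.log(u/g)
numeric = np.sum((f[:-1] + f[1:])/2) * (x[1] - x[0])        # trapezoid rule
```

Both values equal \(\frac{1}{2}\ln (\pi e/6)\approx 0.1765\), independently of a.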
Example 3
Example 4
5 Optimal Estimations and Main Results
5.1 Basis for the Simple Case
We shall now show that \(d_N\) gives a lower bound on the compression of uniform densities; namely, we will calculate the mismatch between the optimal model given by the uniform density on \(S\subset \mathbb {R}^N\) and the approximation given by normal densities \(\mathcal {G}\), measured by the Kullback–Leibler divergence (see Definition 3).
Proposition 1
Proof
Clearly, if S is a ball centered at zero such that \(\Sigma _S=\mathrm {I}\), then by Example 3 we obtain \(S=\mathcal {B}(0,\sqrt{N+2})\).
5.2 Main Results: New Measures
We shall now broaden the previous theorem to a more general case. This will provide the grounds for defining the new measures.
Theorem 2
Proof
Without loss of generality, by applying a translation if necessary, we can reduce to the case when \(\mu _S=0\). Next, by applying the transformation \(x \rightarrow (\Sigma _S)^{-1/2}x\), we reduce the theorem to the case when \(\Sigma _S=\mathrm {I}\). Proposition 1 completes the proof. \(\square \)
Corollary 1
Proof
By considering the family of all spherical Gaussians \(\mathcal {G}_{(\cdot \mathrm {I})}\), that is, the Gaussians with covariance proportional to the identity, we obtain the formula for N-ball identification.
Corollary 2
Proof
Remark 4
6 The New Measure Properties in Simple Illustrations
The following theorem summarizes the desirable properties of \(\mathcal {E}_N\) and \(\mathcal {C}_N\).
Theorem 3
 (a)
\(\mathcal {E}_N(S)\in (0,1]\) for all sets S;
 (b)
\(\mathcal {C}_N(S)\in (0,1]\) for all sets S;
 (c)
\(\mathcal {E}_N(S)=1 \Leftrightarrow S \text { is an ellipse}\);
 (d)
\(\mathcal {C}_N(S)=1 \Leftrightarrow S \text { is an } N\text {-ball}\) (see footnote 6);
 (e)
\(\mathcal {C}_N\) is invariant with respect to similarity and isometric transformations;
 (f)
\(\mathcal {E}_N\) is invariant with respect to affine transformations.
Proof
Items (a)–(d) follow directly from Corollaries 1 and 2.
Items (e) and (f) follow from the properties of the covariance matrix.
In the following part of this section the new circularity measure properties are illustrated.
6.1 Nonfrontal View Image Correction
6.2 Noise Resistance
6.2.1 Shape Boundary Noise
Figure 5 illustrates the robustness of \(\mathcal {E}_2\). The presented shapes have similar measured ellipticity even though the last shape has a very high noise level. The noise is added to the shape boundary; thus, the perimeter of the object is increased. This experiment shows that the new measure can cope with such a situation.
6.2.2 Salt and Pepper
In common applications the images we work with contain some noise, i.e., a random, unwanted signal. Figure 13 illustrates the reliability of \(\mathcal {C}_2\) under salt-and-pepper noise, in which a certain amount of the pixels in the image are either black or white. The percentage level describes the probability of occurrence of this kind of noise. The experiments show that, since the covariance matrix is its base component, \(\mathcal {C}_2\) performs well in such situations.
6.2.3 Missing Values Resistance
6.3 Circle Estimation
Figure 6 presents the circularity measure \(\mathcal {C}_2\) for regular polygons from the equilateral triangle to the dodecagon. The aim of this example is to find a good approximation of a circle. From Theorem 3 we derive that \(\mathcal {C}_2\) reaches 1 only for a perfect circle. Thus, we want to acquire a simple template which can be treated as an approximation of a circle.
First of all, we can confirm that the circularity measure behaves in a natural way—it increases with the number of polygon sides.
Figure 6 shows that for a hexagon a value of 0.9924 is reached, which gives two-decimal-place accuracy. Moreover, if higher precision is needed, a decagon provides an accuracy of three decimal places.
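For regular polygons \(\mathcal {C}_2\) even admits a closed form that reproduces the values discussed above. The derivation below is our own (it uses the standard second-moment formula for a triangle about a common apex vertex) and is not taken from the paper:

```python
from math import sin, cos, pi

def c2_regular_polygon(n):
    # C_2 for a filled regular n-gon with circumradius 1:
    # area = (n/2)*sin(2*pi/n); by splitting the polygon into n isoceles
    # triangles with a common apex at the centroid, tr(Sigma) = (2 + cos(2*pi/n))/6;
    # then C_2 = area / (2*pi*tr(Sigma)).
    t = 2*pi / n
    return (n/2)*sin(t) / (2*pi * (2 + cos(t))/6)
```

This gives \(\mathcal {C}_2\approx 0.9924\) for the hexagon and \(\approx 0.9992\) for the decagon, matching Fig. 6.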
6.4 \(\mathcal {E}_2\) and \(\mathcal {C}_2\) Behavior
Figure 7 presents images ranked with respect to \(\mathcal {C}_2\). A different ranking is obtained by the measures \(\mathcal {C}_{st}\) and \(\mathcal {C}_H\). This example illustrates how changing a shape can lead to differences in the measured circularity. In this case the changes in the measured circularity \(\mathcal {C}_2\) are in accordance with the natural perception of how a circularity measure should behave.
Figure 8 presents the same experiment for \(\mathcal {E}_2\).
We highlight that the values of \(\mathcal {C}_{st}\) and \(\mathcal {C}_H\) were taken from [31], while \(\mathcal {E}_H\) was taken from [30]. Moreover, the values of \(\mathcal {C}_2\) and \(\mathcal {C}_H\) are theoretically equal—compare with Observation 1—the differences are caused by a numerical error. On the other hand, \(\mathcal {E}_2\) and \(\mathcal {E}_H\) are in general not equal—see Fig. 8i.
6.5 3D Shapes

we choose \(\delta >0\);

by taking \(P=S\cap (\delta \mathbb {Z})^3\) we obtain a discrete representation of our shape S;

each point \(x \in P\) is replaced by a cube \(Q_x\) of side \(\delta \) centered at that point, namely \(\mu (Q_x)=x\) for each \(x\in P\); we put \(Q=\cup _{x\in P}Q_x\);
 we calculate the circularity and the ellipticity of the obtained shape by the equations from Corollaries 1 and 2 (with the grid correction of footnote 2) as follows:$$\begin{aligned} \mathcal {E}_3(Q)= & {} \frac{3\sqrt{5}}{100\pi }\cdot \frac{\mathrm {card}(P) \cdot \delta ^3}{ \sqrt{\det (\Sigma _P+\frac{1}{12}\delta ^2\mathrm {I})}},\\ \mathcal {C}_3(Q)= & {} \frac{9\sqrt{15}}{100\pi }\cdot \frac{\mathrm {card}(P) \cdot \delta ^3}{(\mathrm {tr}(\Sigma _P)+\frac{1}{4}\delta ^2)^{3/2}}. \end{aligned}$$
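The 3D pipeline above can be sketched compactly in Python. The code is our own; it assumes the Corollary 1 and 2 constants for N = 3 (a coefficient \(9\sqrt{15}/(100\pi )\) and an exponent 3/2 on the trace term, which make \(\mathcal {C}_3\) of a ball equal to 1) together with the grid covariance correction \(\frac{1}{12}\delta ^2\mathrm {I}\) of footnote 2:

```python
import numpy as np
from math import sqrt, pi

def measures_3d(P, delta):
    # P: (n, 3) array of points of the grid (delta*Z)^3 lying inside the shape.
    # Sigma_P is corrected by (1/12)*delta^2*I for the cubes of side delta.
    P = np.asarray(P, dtype=float)
    vol = len(P) * delta**3                        # lambda_3(Q) = card(P)*delta^3
    Sigma = np.cov(P, rowvar=False, bias=True) + (delta**2/12)*np.eye(3)
    E3 = 3*sqrt(5)/(100*pi) * vol / sqrt(np.linalg.det(Sigma))
    C3 = 9*sqrt(15)/(100*pi) * vol / np.trace(Sigma)**1.5
    return E3, C3
```

For a voxelized ball both measures approach 1 as \(\delta \rightarrow 0\); for an elongated ellipsoid only \(\mathcal {E}_3\) stays near 1.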
Figures 9 and 10 present examples of a sphere and an ellipsoid, respectively. The ellipticity and circularity measures increase with the approximation accuracy. It can thus be concluded that the behavior of the new measure is natural even in higher dimensions.
Figure 11 presents the situation for a shape with a hole. Both measures respond to this defect correctly and the calculated value is low (Figs. 12, 13, 14).
7 Conclusions
The authors have placed their research efforts in the field of pattern recognition to establish a new measure of circularity and ellipticity based on moments. The proposed measure works in arbitrary dimensions, so we can test, e.g., for N-squares. The theoretical background and the proof that the conditions are well defined are also presented in this work.
This approach can be treated as a generalization of the measures \(\mathcal {C}_H\) and \(\mathcal {E}_I\) mentioned in Sect. 2. Moreover, the authors' approach can be applied in arbitrary dimensions (see Sect. 6).
The fact that circles and ellipses maximize the above invariant has enabled the authors to introduce the new circularity \(\mathcal {C}_N\) and ellipticity \(\mathcal {E}_N\) measures defined in Corollaries 1 and 2. It is shown that \(\mathcal {C}_N\) and \(\mathcal {E}_N\) range over the interval (0, 1] and equal 1 if and only if the investigated set is, respectively, a circle or an ellipse.

The main advantages of the proposed measures are:

behavior consistent with intuition;

invariance under similarity transformations;

applicability in higher dimensions;

a simpler description of any given object;

significantly reduced calculation time.
Footnotes
 1.
If the set S is discrete, namely \(S=\{x_i\}_{i=1}^n\), then the covariance matrix equals \( \Sigma _S=\frac{1}{n}\sum _{i=1}^n(x_i-\mu )(x_i-\mu )^T, \) where \(\mu =\frac{1}{n}\sum _{i=1}^nx_i\) is the mean of the set S. In the general case, by the covariance matrix of a set we understand the covariance of the uniform normalized density on S.
 2.
If a set S is a discrete subset of \(\mathbb {Z}^N\), then in practical considerations we view S as a discrete representation of the set \(S_1=S+[-1/2,1/2]^N\). Thus, the value \(\lambda (S_1)\) equals \(\mathrm {card}(S)\) and the covariance matrix \(\Sigma _{S_1}\) equals \(\Sigma _S+\frac{1}{12} I\).
 3.
By \(\varGamma (x)\), for \(x>0\), we denote the Gamma function which is an extension of the factorial function.
 4.
We measure the boundary length as the sum of distances between consecutive middle points of the squares which form the boundary of the set; see Fig. 3.
 5.
According to the values of \(\mathcal {C}_N\) and \(\mathcal {E}_N\) we can decide whether a given object is more like a circle (both measures get high values) or an ellipse (the ellipticity measure has a high value). With this knowledge we can observe or investigate, e.g., the movement of the whole object using the simpler shape, a circle or an ellipse, respectively (compare with Fig. 1).
 6.
With accuracy to the set of Lebesgue measure zero.
Acknowledgments
This work was supported by the Polish National Centre of Science Grant No. 2012/07/N/ST6/02192.
References
1. Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 24(4), 509–522 (2002)
2. Bennett, C.H., Gács, P., Li, M., Vitányi, P., Zurek, W.: Information distance. IEEE Trans. Inf. Theory 44(4), 1407–1423 (1998)
3. Bowman, E.T., Soga, K., Drummond, W.: Particle shape characterisation using Fourier descriptor analysis. Geotechnique 51(6), 545–554 (2001)
4. Chiang, C.C., Ho, M.C., Liao, H.S., Pratama, A., Syu, W.C.: Detecting and recognizing traffic lights by genetic approximate ellipse detection and spatial texture layouts. Int. J. Innov. Comput. Inf. Control 7(12), 6919–6934 (2011)
5. Cover, T., Thomas, J.: Elements of Information Theory. Wiley, New York (2006)
6. Cox, E.: A method of assigning numerical and percentage values to the degree of roundness of sand grains. J. Paleontol. 1(3), 179–183 (1927)
7. Di Ruberto, C., Dempster, A.: Circularity measures based on mathematical morphology. Electron. Lett. 36(20), 1691–1693 (2000)
8. Fitzgibbon, A., Pilu, M., Fisher, R.: Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 476–480 (1999)
9. Flusser, J., Suk, T.: Pattern recognition by affine moment invariants. Pattern Recognit. 26(1), 167–174 (1993)
10. Flusser, J., Zitova, B., Suk, T.: Moments and Moment Invariants in Pattern Recognition. Wiley, New York (2009)
11. Liu, M., et al.: Shape retrieval using hierarchical total Bregman soft clustering. IEEE Trans. Pattern Anal. Mach. Intell. 34(12), 2407–2419 (2012)
12. Gorelick, L., Galun, M., Sharon, E., Basri, R., Brandt, A.: Shape representation and classification using the Poisson equation. IEEE Trans. Pattern Anal. Mach. Intell. 28(12), 1991–2005 (2006)
13. Haralick, R.M.: A measure for circularity of digital figures. IEEE Trans. Syst. Man Cybern. 4, 394–396 (1974)
14. Herrera-Navarro, A.M., Jiménez-Hernández, H., Terol-Villalobos, I.R.: A probabilistic measure of circularity. In: Combinatorial Image Analysis, pp. 75–89. Springer (2012)
15. Hu, M.: Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 8(2), 179–187 (1962)
16. Ji, Q.: 3D face pose estimation and tracking from a monocular camera. Image Vis. Comput. 20(7), 499–511 (2002)
17. Kullback, S., Leibler, R.A.: On information and sufficiency. Ann. Math. Stat. 22(1), 79–86 (1951)
18. Mahalanobis, P.: On the generalized distance in statistics. In: Proceedings of the National Institute of Sciences of India, vol. 2, pp. 49–55. New Delhi (1936)
19. Misztal, K., Tabor, J.: Mahalanobis distance-based algorithm for ellipse growing in iris preprocessing. In: Computer Information Systems and Industrial Management, pp. 158–167. Springer, Berlin (2013)
20. OpenStax College (Wikimedia Commons): Illustration from Anatomy & Physiology, Connexions. http://cnx.org/content/col11496/1.6/ (red channel). In Wikipedia https://commons.wikimedia.org/wiki/File:1911_Sickle_Cells.jpg (2013). Accessed 19 June 2013
21. Peura, M., Iivarinen, J.: Efficiency of simple shape descriptors. In: Aspects of Visual Form, pp. 443–451 (1997)
22. Proffitt, D.: The measurement of circularity and ellipticity on a digital grid. Pattern Recognit. 15(5), 383–387 (1982)
23. Rahtu, E., Salo, M., Heikkila, J.: A new convexity measure based on a probabilistic interpretation of images. IEEE Trans. Pattern Anal. Mach. Intell. 28(9), 1501–1512 (2006)
24. Rosin, P.L.: Measuring shape: ellipticity, rectangularity, and triangularity. Mach. Vis. Appl. 14(3), 172–184 (2003)
25. Rosin, P.L., Žunić, J.: Measuring squareness and orientation of shapes. J. Math. Imaging Vis. 39(1), 13–27 (2011)
26. Sonka, M., Hlavac, V., Boyle, R.: Image Processing, Analysis, and Machine Vision. PWS Publishing, New York (1999)
27. Stojmenović, M., Nayak, A., Žunić, J.: Measuring linearity of planar point sets. Pattern Recognit. 41(8), 2503–2511 (2008)
28. Tabor, J., Spurek, P.: Cross-entropy clustering. Pattern Recognit. 47(9), 3046–3059 (2014)
29. Zabrodsky, H., Peleg, S., Avnir, D.: Symmetry as a continuous feature. IEEE Trans. Pattern Anal. Mach. Intell. 17(12), 1154–1166 (1995)
30. Žunić, D., Žunić, J.: Shape ellipticity from Hu moment invariants. Appl. Math. Comput. 226, 406–414 (2014)
31. Žunić, J., Hirota, K., Rosin, P.L.: A Hu moment invariant as a shape circularity measure. Pattern Recognit. 43(1), 47–57 (2010)
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.