Graph Clustering, Variational Image Segmentation Methods and Hough Transform Scale Detection for Object Measurement in Images
Abstract
We consider the problem of scale detection in images where a region of interest is present together with a measurement tool (e.g. a ruler). For the segmentation part, we focus on the graph-based method presented in Bertozzi and Flenner (Multiscale Model Simul 10(3):1090–1118, 2012) which reinterprets classical continuous Ginzburg–Landau minimisation models in a totally discrete framework. To overcome the numerical difficulties due to the large size of the images considered, we use matrix completion and splitting techniques. The scale on the measurement tool is detected via a Hough transform-based algorithm. The method is then applied to some measurement tasks arising in real-world applications such as zoology, medicine and archaeology.
Keywords
Graph clustering · Discrete Ginzburg–Landau functional · Image segmentation · Scale detection · Hough transform

1 Introduction
Image segmentation denotes the task of partitioning an image into its constituent parts. Feature-based segmentation looks at distinctive characteristics (features) in the image, grouping similar pixels into clusters which are meaningful for the application at hand. Typical examples of features are based on greyscale/RGB intensity and texture. Mathematical methods for image segmentation are mainly formalised in terms of variational problems in which the segmented image is a minimiser of an energy. The most common image feature encoded in such energies is the magnitude of the image gradient, detecting regions (or contours) where sharp variations of the intensity values occur. Examples include the Mumford–Shah segmentation approach [51] and the snakes and geodesic active contour models [16, 38]. Moreover, in [18], Chan and Vese proposed an instance of the Mumford–Shah model for piecewise constant images whose energy is based on the mean grey values of the image inside and outside of the segmented region rather than the image gradient and hence does not require strong edges for segmentation. The Chan–Vese model has been extended to vector-valued images such as RGB images in [19]. Other image segmentation methods have been considered in [26, 39]. They rely on the use of the total variation (TV) seminorm [5], which is commonly used for image processing tasks due to its properties of simultaneous edge preservation and smoothing (see [55]).
The non-smoothness of most segmentation energies usually renders their numerical minimisation difficult. In the case of the Mumford–Shah segmentation model, the numerical realisation is additionally complicated by its dependency on the image function as well as the object contour. To overcome this, several regularisation methods and approximations have been proposed in the literature, e.g. [4, 10, 11, 66] for Mumford–Shah segmentation. In the context of TV-based segmentation models, the Ginzburg–Landau functional plays an important role. Originally considered for the modelling of physical phenomena such as phase transition and phase separation (cf. [12] for a survey on the topics), it is used in imaging for approximating the TV energy. Some examples of the use of this functional in the context of image processing are [24, 25, 26], which relate to previous works by Ambrosio and Tortorelli [4, 5] on diffuse interface approximation models.
Such variational methods for image segmentation have been extensively studied from an analytical point of view, and the segmentation is usually robust and computationally efficient. However, variational image segmentation as described above still faces many problems in the presence of low contrast and the absence of clear boundaries separating regions. Their main drawback is that they are limited to image features which can be mathematically formalised (e.g. in terms of an image gradient) and encoded within a segmentation energy. In recent years, dictionary-based methods have become more and more popular in the image processing community, complementing more classical variational segmentation methods. By learning the distinctive features of the region to be segmented from examples provided by the user, these methods are able to segment the desired regions in the image correctly.
In this work, we consider the method proposed in [9, 28, 42, 43] for image segmentation and labelling. This approach goes beyond the standard variational approach in two respects. Firstly, the model is set up in the purely discrete framework of graphs. This is rather unusual for variational models, where one normally considers functionals and function spaces defined on subdomains of \({\mathbb {R}}^2\) in order to exploit properties and tools from convex and functional analysis and the calculus of variations. Secondly, the new framework allows for more flexibility in terms of the features considered. Additional features, such as texture or light intensity, can be considered as well without encoding them in the function space or the regularity of the functions. Due to the possibly very large size of the image (nowadays of the order of megapixels for professional cameras) and the large number of features considered, the construction of the problem may be computationally expensive and often requires reduction techniques [27, 52, 53]. In several papers (see, for example, [33, 59, 61]), the segmentation problem was rephrased in the graph framework by means of the graph cut objective function. Follow-up works on the use of graph-based approaches are, for instance, [44, 45], where an iterative application of heat diffusion and thresholding, also known as the Merriman–Bence–Osher (MBO) method [46], is discussed for binary image labelling, and [36], where the Mumford–Shah model is reinterpreted in a graph setting.
A similar challenge can be encountered in medical applications monitoring and quantifying the evolution of skin moles for early diagnosis of melanoma (skin cancer). A normally user-dependent measurement of the mole is performed using a ruler located next to it. A picture is then taken and used for future comparisons and follow-up; see Fig. 3 and compare [1, 17] for previous attempts at automatic detection of melanomas. For such an application, a systematic quantitative analysis is also required.^{1}
Outline of the method We consider the image as a graph whose vertices are the image pixels. Similarity between pixels in terms of colour or texture features is modelled by a weight function defined on the set of vertices. Our method runs as follows. Firstly, using examples provided by the user (dictionaries) as well as matrix completion and operator splitting techniques, the segmentation of the region of interest is performed. In the graph framework, this corresponds to clustering together pixels having similar features. This is obtained by minimising on the graph the Ginzburg–Landau functional typically used in the continuum setting to describe diffuse interface problems. In order to provide quantitative measurements of the segmented region, a second detection step is then performed. The detection here aims to identify the distinctive geometrical features of the measurement tool (such as line alignment for rulers or circularity for circles) to get the scale on the measurement tool considered. The segmented region of interest can now be measured by simple comparisons, and quantitative measurements such as perimeter and area can be provided.
Contribution We propose a self-contained programme combining automated detection and subsequent size measurement of objects in images where a measurement tool is present. Our approach is based on two powerful image analysis techniques in the literature: a graph segmentation approach which uses a discretised Ginzburg–Landau energy [9] for the detection of the object of interest and the Hough transform [35] for detecting the scale of the measurement tool. While these methods are state of the art, their combination for measuring object size in images proposed in this paper is new. Moreover, to our knowledge there are only few contributions in the literature that broach the issue of how the graph segmentation approach and the Hough transform are applied to specific problems [28, 32, 43]. Indeed, here we present these methodologies in detail, especially discussing important aspects of their practical implementation, and demonstrate the robust applicability of our programme for measuring the size of objects, showcasing its performance on several examples arising in zoology, medicine and archaeology. Namely, we first apply our combined model to the measurement of the blaze on the forehead of male pied flycatchers, for which we run a statistical analysis on the accuracy and predicted error of the measurement on a database of thirty images. State-of-the-art methods for such a task typically require the user to fit polygons inside or outside the blaze [54] or to segment the blaze by hand [50]. Similarly, the scale on the measurement tool is typically read from the image by manually measuring it on the ruler. With respect to medical applications, we apply our combined method to the segmentation and measurement of melanomas.
Although efficient segmentation methods for automated melanoma detection already exist in the literature (see, for example, [1, 17]), to the knowledge of the authors no previous methods exist that provide their measurement by detecting the scale on the ruler placed next to them (see Fig. 3). Conversely, in the case of archaeological applications, some models for the automatic detection of the measurement tool in the image exist [34], but no automatic methods are proposed for the segmentation of the regions of interest. A free release of the MATLAB code used to compute the results will be made available after the zoological analysis of the pied flycatcher data based on our segmentation and measurement has been completed [14].
Organisation of the paper In Sect. 2, we present the mathematical ingredients used for the design of the graph-based segmentation technique used in [9, 28, 42, 43]. They come from two different worlds: the framework of diffusion PDEs used for modelling phase transition/separation problems (see Sect. 2.1) and graph theory and clustering, see Sect. 2.2. With a view to a detailed numerical explanation, we also recall a splitting technique and a popular matrix completion technique used in our problem to overcome the computational costs. In Sect. 3, we explain how the geometrical Hough transform is used to detect the scale in an image. Finally, Sect. 4 contains the numerical results obtained with our combined method applied to the problems described above. For completeness, we give some details on the Nyström matrix completion technique in ‘Appendix 1’ and a review of the Hough transform for line and circle detection in ‘Appendix 3’.
2 Image Segmentation as Graph Clustering
We present in this section the mathematical background for the design of the Ginzburg–Landau (GL) graph segmentation algorithm introduced in [9]. There, the image segmentation problem is rephrased as a minimisation problem on a graph defined by features computed from the image. Compared to the methods above, the graph framework allows for more freedom in terms of the possible features used to describe the image, such as texture.
2.1 The Ginzburg–Landau Functional as Approximation of TV
In the following, we recall the main properties of the original continuum version of the GL functional explaining its importance in the context of image segmentation problems as well as the main concepts of graph theory which will be used for the segmentation modelling.
2.2 Towards the Modelling: The Graph Framework
In the following, we rely on the method presented in [9, 42] for high-dimensional data classification on graphs, which has been applied to several imaging problems [28, 43], showing good performance and robustness. We consider the problem of binary image segmentation where we want to partition a given image into two components, where each component is a set of pixels (also called a cluster, or a class) and represents a certain object or group of objects. Typically, some a priori information describing the object(s) we want to extract is given and serves as initial input for the segmentation algorithm. For image labelling, in [9] two images are taken as input: the first one has been manually segmented in two classes and the objective is to automatically segment the second image using the information provided by the segmentation of the first one.
In operator form, the weight matrix \(W\in {\mathbb {R}}^{S\times S}\) is the nonnegative symmetric matrix whose elements are \(w_{i,j}=w(x_i,x_j)\). In the following, we will not distinguish between the two functions w and \(\hat{w}\) and, with a little abuse of notation, we will write \(w(z_i,z_j)\) for \(\hat{w}(z_i,z_j)\).
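To make the construction of W concrete, here is a minimal NumPy sketch. The Gaussian similarity \(w(z_i,z_j)=\exp (-\Vert z_i-z_j\Vert ^2/\sigma ^2)\) and the toy feature vectors are illustrative assumptions, not necessarily the exact weight function of the paper:

```python
import numpy as np

def weight_matrix(Z, sigma2=20.0):
    """Build the non-negative symmetric weight matrix W with
    w_ij = exp(-||z_i - z_j||^2 / sigma2) for feature vectors z_i
    (Gaussian similarity; an illustrative choice of weight function)."""
    # Squared pairwise distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(Z**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    D2 = np.maximum(D2, 0.0)   # guard against negative round-off
    return np.exp(-D2 / sigma2)

Z = np.random.default_rng(0).normal(size=(5, 3))  # 5 toy feature vectors
W = weight_matrix(Z)
```

By construction W is symmetric with unit diagonal, and strongly similar feature vectors receive weights close to 1.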
Remark 1
Weight functions express the similarities between vertices and will be used in the following to partition V into clusters such that the sum of the edge weights between the clusters is small. There are many different mathematical approaches to attempt this partitioning. When formulated as a balanced cut minimisation, the problem is NP-complete [67], which inspired relaxations which are more amenable to computational approaches, many of which are closely related to spectral graph theory [59]. We refer the reader to [20] for a monograph on the topic. The method we use in this paper can be understood (at least in spirit, if not technically, [63, 64]) as a nonlinear extension of the linear relaxed problems.
To solve the segmentation problem, we minimise a discrete GL functional (which is formulated in the graph setting, instead of the continuum setting), via a gradient descent method similar to the one described in Sect. 2.1. In particular, in this setting the Laplacian in (2.3) will be a (negative) normalised graph Laplacian. We will use the spectral decomposition of u with respect to the eigenfunctions of this Laplacian. In Sect. 2.4, we discuss the Nyström method, which allows us to quickly compute this decomposition, but first we introduce the graph Laplacian and graph GL functional.
The discrete operators We start from the definition of the differential operators in the graph framework.
A subset A of the vertex set V is connected if any two vertices in A can be connected by a path (i.e. a sequence of vertices such that subsequent vertices are connected by an edge in E) such that all the vertices of the path are in A. A finite family of sets \(A_1,\ldots ,A_t\) is called a partition of the graph if \(A_i\cap A_j=\emptyset \) for \(i\ne j\) and \(\bigcup _i A_i=V\).
Remark 2
The operator L has S non-negative real-valued eigenvalues \(\left\{ \lambda _i\right\} _{i=1}^S\) which satisfy \(0=\lambda _1\le \lambda _2\le \cdots \le \lambda _S\). The eigenvector corresponding to \(\lambda _1\) is the constant \(S\)-dimensional vector \(\mathbf {1}_S\), see [67].
In [59, Section 5], a quick review of the connections between the use of the symmetric graph Laplacian (2.8) and spectral graph theory is given. Computing the eigenvalues of the normalised symmetric Laplacian corresponds to the computation of the generalised eigenvalues used to compute normalised graph cuts in a way that the standard graph Laplacian may fail to do, compare [20]. Typically, spectral clustering algorithms for binary segmentation base the partition of a connected graph on the eigenvector corresponding to the second eigenvalue of the normalised Laplacian, using, for example, k-means. For further details and a comparison with other methods, we refer the reader to [59] and to [9, Section 2.3], where a detailed explanation of the importance of the normalisation of the Laplacian is given.
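A minimal sketch of this spectral bipartition on a toy two-cluster graph may be helpful; the clique weights and the sign-thresholding of the second eigenvector (in place of k-means) are illustrative simplifications:

```python
import numpy as np

def symmetric_laplacian(W):
    """L_s = I - D^{-1/2} W D^{-1/2}, with D the diagonal degree matrix."""
    d = W.sum(axis=1)
    Dm12 = 1.0 / np.sqrt(d)
    return np.eye(len(W)) - (Dm12[:, None] * W) * Dm12[None, :]

# Toy graph: two tightly connected cliques, weakly coupled to each other
W = np.full((6, 6), 0.01)
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)

Ls = symmetric_laplacian(W)
vals, vecs = np.linalg.eigh(Ls)   # eigenvalues in ascending order
labels = vecs[:, 1] > 0           # bipartition from the second eigenvector
```

The smallest eigenvalue is 0 (the graph is connected), and the sign pattern of the second eigenvector recovers the two cliques.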
Remark 3
Several approaches to SSL using graph theory have been considered in the literature, compare [21, 30]. The approach presented here adapts fast algorithms available for the efficient minimisation of the continuous GL functional to the minimisation of the discrete one in (2.9). In particular, to overcome the high computational costs, we present in the following an operator splitting scheme and a matrix completion technique applied to our problem.
2.3 Convex Splitting
At each time step \(n\Delta t\), \(n\ge 1\), consider at every point the spectral decomposition of \(U_n\) with respect to the eigenvectors \(v_k\) of the operator \(L_s\) as
$$\begin{aligned} U_n(x)=\sum _{k} \alpha _n^k(x) v_k(x),\quad x\in V, \end{aligned}$$ (2.15)
with coefficients \(\alpha _n^k\). Similarly, use the spectral decomposition in the \(\left\{ v_k\right\} \) basis for the other nonlinear quantities appearing in (2.13).

Having fixed the basis of eigenfunctions, the numerical approximation in the next time step \(U_{n+1}\) is computed by determining the new coefficients \(\alpha ^k_{n+1}\) in (2.15) for every k through convex splitting (2.13).
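The coefficient update can be sketched as follows. This is a hedged illustration, not the paper's exact scheme (2.13): it assumes a standard semi-implicit convex splitting for the graph GL flow with double-well \(W(u)=(u^2-1)^2/4\), in which the diffusion term \(\varepsilon L_s\) and the convexifying term \(C u\) are treated implicitly and the remaining (concave) part explicitly, so that the update diagonalises in the eigenbasis:

```python
import numpy as np

def gl_step(alpha, lam, V, dt=0.01, eps=0.1, C=25.0):
    """One convex-splitting step for the graph GL gradient flow, assuming
        (1 + dt*(eps*lam_k + C)) * alpha_{n+1}^k
            = alpha_n^k + dt * <C*u_n - W'(u_n)/eps, v_k>,
    with W'(u) = u^3 - u.  The exact scheme (2.13) may differ in constants."""
    u = V @ alpha                        # reassemble u_n from its coefficients
    explicit = C * u - (u**3 - u) / eps  # explicitly treated (concave) part
    beta = V.T @ explicit                # its coefficients in the eigenbasis
    return (alpha + dt * beta) / (1.0 + dt * (eps * lam + C))

# Toy setting: 4 nodes with the identity as eigenvector basis
lam = np.array([0.0, 0.5, 1.0, 1.5])     # eigenvalues of L_s
V = np.eye(4)                            # eigenvectors as columns
alpha = np.array([0.9, -0.8, 0.7, -0.6])
for _ in range(200):
    alpha = gl_step(alpha, lam, V)
```

With these illustrative parameters the coefficients settle near the wells \(\pm 1\) of the double-well potential, slightly pulled inwards by the diffusion term.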
2.4 Matrix Completion via Nyström Extension
Following the detailed discussion in [9, Section 3.2], we present here the Nyström technique for matrix completion [53] used in previous works by the graph theory community [7, 27] and applied later to several imaging problems [44, 45, 52]. In our problem, the Nyström extension is used to find an approximation of the eigenvectors \(v_k\) of the operator \(L_s\). We will freely switch between the representation of eigenvectors (or eigenfunctions) as real-valued functions on the vertex set V and as vectors in \({\mathbb {R}}^S\).
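The core idea of the extension can be sketched as follows. For brevity, this illustration approximates the eigenvectors of the weight matrix W (assuming a Gaussian kernel) rather than of \(L_s\) itself, and the sample indices and feature vectors are toy data; the extension formula \(v_Y \approx W_{YX}\, U\, \Lambda ^{-1}\) is the essential step:

```python
import numpy as np

def nystrom_eigs(Z, idx, sigma2=20.0):
    """Nystrom approximation of the eigenpairs of the full weight matrix W
    from a sample of rows idx: diagonalise the small block W_XX exactly,
    then extend the eigenvectors to the unsampled points via W_YX."""
    def gauss(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-d2 / sigma2)

    X = Z[idx]                                    # sampled feature vectors
    rest = np.setdiff1d(np.arange(len(Z)), idx)   # unsampled points
    Wxx = gauss(X, X)                             # small L x L block
    Wxy = gauss(X, Z[rest])                       # L x (S-L) cross block
    vals, U = np.linalg.eigh(Wxx)                 # exact eigenpairs of W_XX
    ext = Wxy.T @ U / vals[None, :]               # extend: v_Y = W_YX U / vals
    V = np.zeros((len(Z), len(idx)))
    V[idx] = U
    V[rest] = ext
    return vals, V

rng = np.random.default_rng(1)
Z = rng.normal(size=(40, 3))                      # 40 toy feature vectors
vals, V = nystrom_eigs(Z, np.arange(10))          # sample 10 of them
```

Only the \(L\times L\) and \(L\times (S-L)\) blocks of W are ever formed, which is the point of the method when S is of the order of megapixels.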
2.5 Pseudocode
We will now give further details. First, we randomly select L pixels from I. As described in Sect. 2.2, we now create a vertex set \(V\cong I\), which we partition into a set X, consisting of the vertices corresponding to the L randomly chosen pixels, and a set \(Y:= V {\setminus } X\). We now compute the feature vectors of each vertex in V. If I is a greyscale image, we can represent features by an intensity map \(f:V \rightarrow {\mathbb {R}}\). If I is an RGB colour image instead, we use a vector-valued (red, green, and blue) intensity map \(f:V\rightarrow {\mathbb {R}}^3\) of the form \(f(x)=(f_{{R}}(x), f_{{G}}(x), f_{{B}}(x))\). We mirror the boundary to define neighbourhoods also on the image edges. The feature function \(\psi : V \rightarrow {\mathbb {R}}^K\) concatenates the intensity values in the neighbourhood \(\nu (x)\) of a pixel into a vector: \(\psi (x) := (f(\nu _1(x)), \ldots , f(\nu _{\tilde{\tau }}(x)))^\mathrm{T}\), where \(\nu (x)=(\nu _1(x),\ldots ,\nu _{\tilde{\tau }}(x))\in {\mathbb {R}}^{\tilde{\tau }}\) is the neighbourhood vector of \(x\in V\) defined in Sect. 2.2 and \(\tilde{\tau }= (2\tau +1)^2\), the size of the neighbourhood of x. Note that \(K=\tilde{\tau }\) if I is a greyscale image and \(K=3\tilde{\tau }\) if I is an RGB colour image.
Additional features can be considered, such as texture. For instance, we consider the eight MR8 filter responses [65] as texture features on a greyscale image and choose the function \(t:V\rightarrow {\mathbb {R}}^8\) as \(t(x) = (\text {MR8}_1(x), \ldots , \text {MR8}_8(x))\). Hence, the feature function \(\psi \) is now defined as \(\psi (x):=(t(\nu _1(x)),\ldots ,t(\nu _{\tilde{\tau }}(x)))^\mathrm{T}\), where \(\nu (x)\) and \(\tilde{\tau }\) are defined as above. Here, \(K=8\tilde{\tau }\). Of course, a combination of colour and texture features can be considered as well by defining \(\psi \) as \(\psi (x):=(f(\nu _1(x)),t(\nu _1(x)),\ldots ,f(\nu _{\tilde{\tau }}(x)),t(\nu _{\tilde{\tau }}(x)))\) for every x in V. In this case, when dealing with RGB colour images, the dimension of the feature vector is \(K=11\tilde{\tau }\).
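The neighbourhood-concatenation step above, including the mirrored boundary, can be sketched as follows (a minimal implementation of the intensity features; the MR8 responses would be stacked analogously as extra channels):

```python
import numpy as np

def patch_features(img, tau=1):
    """Concatenate the intensity values in the (2*tau+1)^2 neighbourhood of
    each pixel into a feature vector, mirroring the image at the boundary.
    For an RGB image of shape (H, W, 3) this yields K = 3*(2*tau+1)^2."""
    if img.ndim == 2:
        img = img[:, :, None]   # treat a greyscale image as one channel
    H, W, ch = img.shape
    pad = np.pad(img, ((tau, tau), (tau, tau), (0, 0)), mode='reflect')
    feats = []
    for dy in range(2 * tau + 1):       # shifted copies cover the whole
        for dx in range(2 * tau + 1):   # neighbourhood of every pixel
            feats.append(pad[dy:dy + H, dx:dx + W, :])
    # (H*W, K): one feature vector per pixel, in row-major pixel order
    return np.concatenate(feats, axis=2).reshape(H * W, -1)

img = np.arange(25, dtype=float).reshape(5, 5)
Psi = patch_features(img, tau=1)   # shape (25, 9) for this greyscale image
```

The shifted-copies construction avoids an explicit loop over pixels, which matters for megapixel images.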
3 Hough Transform for Scale Detection
In order to detect objects in an image with specific, a priori specified shapes, in the following we will make use of the Hough transform. For our purposes, we will focus in particular on straight line detection (for which the Hough transform was originally introduced [35]) and circle detection [23]. Applications of this transform to more general curves exist as well. In [8, 41], the Hough transform is used in the context of astronomical and medical images for a specific class of curves (Lamet and Watt curves). In [32], applications to cellular mitosis are presented. There, the Hough transform recognises the cells (as circular/elliptical objects) and tracks them in the process of cellular division. For more details on the use of the Hough transform for line and circle detection, we refer the interested reader to ‘Appendix 3’.
Numerical strategy Hough transform methods for edge detection are usually applied to binary images. Therefore, we start by using the classical Canny method for edge detection [15] in which we replace the original preliminary Gaussian filtering by an edgepreserving total variation smoothing [55] which has the advantage of removing noise while preserving edges. This step will result in a binary image for the most prominent edges in the image. Having decided which geometrical shape we are interested in (and, as such, its general parametric representation), the corresponding parameter space is subdivided into accumulator arrays (cells) whose dimension depends on the dimension of the parameter space itself (2D in the case of straight lines, 3D in the case of circles). Each accumulator array groups a range of parameter values. The accumulator array is initialised to 0 and incremented every time an object in the parameter space passes through the cell. In this way, one looks for the peaks over the set of accumulator arrays as they indicate a high value of intersecting objects for a specific cell. In other words, they are indicators of potential objects having the specific geometrical shape we are interested in.
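The accumulator-voting scheme for straight lines can be sketched as follows; the parametrisation \(\rho = x\cos \theta + y\sin \theta \) is the standard one, while the grid resolutions and the toy edge image are illustrative choices:

```python
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=200):
    """Minimal Hough accumulator for straight lines.  Each edge pixel (x, y)
    votes for all (rho, theta) with rho = x*cos(theta) + y*sin(theta);
    peaks in the accumulator indicate candidate lines."""
    H, W = edges.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(H, W)
    ys, xs = np.nonzero(edges)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[bins, np.arange(n_theta)] += 1   # one vote per theta cell
    return acc, thetas

# Toy binary edge image containing one horizontal line of 40 pixels
edges = np.zeros((50, 50), dtype=bool)
edges[20, 5:45] = True
acc, thetas = hough_lines(edges)
rho_bin, theta_bin = np.unravel_index(acc.argmax(), acc.shape)
```

The peak collects one vote per collinear edge pixel, and its \(\theta \) coordinate recovers the line's normal direction (here \(\pi /2\), a horizontal line).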
3.1 Pseudocode
4 Method, Numerical Results and Applications
We report in this section the numerical results obtained by the combination of the methods presented for the detection and quantitative measurement of objects in an image.
To avoid confusion, we will distinguish in the following between two different meanings of scale. Namely, by image scale we denote the proportion between the real dimensions (length, width) of objects in the image and their corresponding dimensions quantified in pixel count. Dealing with measurement tools, we talk about measurement scale to mean the ratio between a fixed unit of measure (mm or cm) marked on the measurement tool considered and the corresponding number of pixels on the image.
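Once the measurement scale is known, converting pixel counts to real units is elementary but worth stating: lengths scale with the inverse of the scale, areas with its inverse square. The numbers below are illustrative, not from the paper's data:

```python
# If the detected measurement scale is p pixels per centimetre, pixel-count
# measurements convert to real units as follows (illustrative values):
p = 120.0                              # pixels per cm, from ruler detection
perimeter_px, area_px = 960.0, 28_800.0
perimeter_cm = perimeter_px / p        # lengths scale with 1/p
area_cm2 = area_px / p**2              # areas scale with 1/p^2
```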
4.1 Male Pied Flycatcher's Blaze Segmentation
Here we present the numerical results obtained by applying Algorithms 1 and 2 to the male pied flycatcher blaze segmentation and measurement problem described in the introduction. Our image database consists of 32 images of individuals from a particular population of flycatchers. Images are \(3648 \times 2736\) pixels and have been taken by a Canon 350D camera with Canon zoom lens EFD 18–55 mm, see Fig. 1. In each image, one of two types of measurement tool is present: a standard ruler or a surface on which two concentric circles of fixed diameter (1 cm the inner one, 3 cm the outer one) are drawn. In the following, we will refer to these tools as linear and circular ruler, respectively. Here, the measurement scale corresponds to the distance between ruler marks for linear rulers and to the radius of the inner circles for circular rulers.
Figure 1 shows clearly that the scale of the images in the database may vary significantly because of the different positioning of the camera in front of the flycatcher. In order to study possible correlations between the dimensions (i.e. perimeter, area) of the blaze and significant behavioural factors, the task then is to segment the blaze and detect automatically the scale of the image considered to provide scaleindependent measurements.
Parameter choice for Algorithm 1 The GL segmentation method exploits similarities and differences between pixels in terms of RGB intensities and texture within their neighbourhood. In our image database, these similarities and differences are very distinctive and will guide the segmentation step. Recalling Sect. 2.5, we note that some parameters need to be tuned for the graph GL minimisation: the number L of Nyström points, the variance \(\sigma \) of the similarity function (2.10), the GL parameter \(\varepsilon \) and the parameter C for the convex splitting (2.12). However, in our numerical experiments we had to tune these parameters only once. Namely, regarding the choice of L for both the head and blaze segmentation, we used values no bigger than \(5\,\%\) of the total size of the image considered. The variance appearing in the similarity function (2.10) was set to \(\sigma ^2=20\), the parameter \(\varepsilon \) was chosen as \(\varepsilon =0.01\) (a smaller choice would create numerical instabilities), and we set the convexity parameter \(C=25\) or larger in order to guarantee the convexity of the functional appearing in (2.12b).
Parameter choice for Algorithm 2 We briefly comment also on the choice of the parameters for the Hough transform, that is, Algorithm 2. Depending on the type of measurement tool considered (linear or circular ruler), different parameter selection methods are used. In the case of linear rulers: for the longest line detection (i.e. ruler edge identification), the parameters \(obj_{max}\) and thresh were set to 1 and to \(85\,\%\) of the maximum value of the Hough transform matrix, respectively; for the detection of the ruler notches, the same parameters were chosen as \(obj_{max}=500\) and \(thresh=20\,\%\) of the maximum value of the Hough transform matrix. As discussed in Sect. 3.1, the range \([s_{min}, s_{max}]\) was chosen based on previously collected average data on the head diameter. In particular, after the head detection step, the number of pixels corresponding to the diameter of the head was automatically computed by means of the EquivDiameter option of the MATLAB routine regionprops and compared with the average measurement of 1.51 cm provided by available databases on pied flycatchers. In this way, an initial, rough estimate of the ruler scale is found and used to determine a spacing parameter s and the interval \([s_{min}, s_{max}]\) by setting \(s_{min}=s/2\) and \(s_{max}=2s\). This range serves as a suppression neighbourhood: once a peak in the Hough transform matrix (i.e. a line or a circle) is identified, the successive peaks found outside this range are set to 0 (i.e. possible lines or circles outside this interval are discarded), while only the ones inside the range (typically, the following line/circle we want to detect) are kept. For our problem, this corresponds to identifying as candidates for ruler notches only the lines lying at a distance of at least \(s_{min}\) and at most \(s_{max}\) from the previously identified peak.
Analogously, the same can be done with circular rulers, where we recall the inner/outer radii are 1 and 3 cm, respectively. In this case, \(obj_{max}=2\) since the circular ruler is made of only two concentric circles.
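The suppression-neighbourhood idea can be illustrated with a simplified 1-D sketch: peaks are picked iteratively from a vote profile (think of an accumulator collapsed onto the \(\rho \) axis), and a candidate is kept only if its distance to the already accepted notches lies in \([s_{min}, s_{max}]\). The profile values and thresholds below are toy data:

```python
import numpy as np

def pick_notches(votes, s_min, s_max, obj_max=500, thresh=0.2):
    """Iteratively pick ruler-notch candidates from a 1-D vote profile.
    A candidate peak is accepted only if its distance to the nearest
    accepted peak lies in [s_min, s_max]; otherwise it is suppressed.
    (A simplified 1-D version of the suppression described in the text.)"""
    votes = votes.astype(float).copy()
    cutoff = thresh * votes.max()
    peaks = []
    while len(peaks) < obj_max and votes.max() >= cutoff:
        p = int(votes.argmax())
        if peaks:
            d = min(abs(p - q) for q in peaks)
            if not (s_min <= d <= s_max):
                votes[p] = 0.0   # discard: spacing outside allowed range
                continue
        peaks.append(p)
        votes[p] = 0.0
    return sorted(peaks)

profile = np.zeros(100)
# Four true notches every 10 pixels plus one spurious peak at 43
profile[[10, 20, 30, 40, 43]] = [50, 48, 47, 46, 30]
notches = pick_notches(profile, s_min=5, s_max=15)
```

The spurious peak at 43 is rejected because it sits only 3 pixels from an accepted notch, below \(s_{min}\).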
Comparison with Chan–Vese model Due to the very irregular contours and the fine texture on the flycatcher’s forehead, standard variational segmentation methods such as Canny edge detection or Mumford–Shah models [4, 10, 11, 51] are not suitable for our task, as preliminary tests showed. Chan–Vese [18] is not suitable either, mainly because of its small-scale detection limits, its dependence on the initial condition and its parameter sensitivity, which may prevent an automatic and accurate segmentation of the tiny, yet characteristic, feathers composing the blaze. In particular, the optimal parameters \(\mu \) and \(\nu \) appearing in the Chan–Vese functional and a sufficiently accurate initial condition typically need to be chosen by trial and error for every image at hand.
4.1.1 Detailed Description of the Method
 1.
For a given unsegmented image, we detect the head of the pied flycatcher through a comparison with a userprepared dictionary (see Fig. 6) using GL segmentation Algorithm 1. Further computations are restricted to the head only.
 2.
Starting from the reduced image, a second step similar to Step 1 is now performed for the segmentation of the blaze, using again Algorithm 1. A dictionary of blazes is used and an extended set of features is considered.
 3.
A refinement step is now performed in order to reduce the outliers detected in the process of segmentation.
 4.
We use the Hough transformbased Algorithm 2 to detect in the image objects with a priori known geometrical shape (lines for linear rulers, circles for circular rulers) for the computation of the measurement scale.
 5.
The final step is the measurement of the values we are interested in (i.e. the perimeter of the blaze, its area and the width and height of the different blaze components). In the case of linear rulers, our results are given up to some error (due to the uncertainty in the detection of the measurement scale, computed as the average of the distances between ruler marks).
In the following, we give more details about each step.
The main computational difficulties in this step are due to the size of the images considered. This may affect the performance of the algorithm since, in order to apply the Nyström completion technique described in Sect. 2.4, one has to choose an adequate number of points whose features approximate well the whole matrix. The larger and the more heterogeneous the image, the larger the number of points needed to produce a sensible approximation. We circumvent this issue by noticing that at this stage of the algorithm we only need a rough detection of the head, which will be used subsequently for the accurate segmentation step. Thus, downscaling the image to a lower resolution (in our practice, reducing the resolution to one tenth of the original) allows us to use a small number of Nyström sample points (typically 150–200) to produce an accurate result.
Remark 4
(Robustness to noise) In order to reproduce the more realistic situation of images suffering from noise, we artificially added Gaussian noise with zero mean and different variances to some of the images in our database and performed the three analysis steps of our method. We report in Fig. 12 the results corresponding to two noise variances \((\sigma _1^2=0.02, \sigma _2^2=0.05)\). The presence of noise influences both the head and blaze segmentation only slightly; the combination of RGB and texture features extracted in the neighbourhood of each point, combined with the comparison to the dictionary, makes the algorithm robust to noise and allows for an accurate blaze segmentation even in the noisy case.
Remark 5
(Comparison with MBO segmentation) We compare the blaze segmentation results obtained by minimising the discrete GL functional with the ones obtained using the segmentation algorithm considered in [44] as a variant of the classical Merriman–Bence–Osher (MBO) scheme [46]. More details on the connections between this approach and the GL minimisation as well as some insights into its numerical realisation are given in ‘Appendix 2’. Following faithfully what is described in Sects. 2.2 and 2.4 for the graph and the operator construction step, respectively, we implemented the MBO segmentation algorithm following [44, Section 2]. We remark that the MBO method has the advantage of eliminating the dependence on the interface parameter \(\varepsilon \) of the GL functional by means of a combination of heat diffusion and a thresholding step. Instead of \(\varepsilon \), the heat diffusion time \(\tau \) needs to be chosen. In our numerical implementation, we used \(\tau =0.005\). Since no convex splitting strategies are required in this case, due to the absence of the nonconvex double-well term, standard Fourier transform methods are used to solve the resulting time-stepping scheme. In Fig. 13, we report the blaze segmentation results obtained after applying a refinement step similar to the one described above: we note that a segmentation result comparable to the ones shown in Fig. 11 is obtained. Moreover, robustness to noise is observed also in this case. In terms of computational times, we observed that the replacement of the GL minimisation step with the MBO one did not affect significantly the speed of the segmentation algorithm.
Outlier removal for linear rulers The scale detection step described above may miss some lines on the ruler. This can be due to oversmoothing in the denoising step, to high threshold values for edge detection, or to the choice of a large spacing parameter. Furthermore, as we can see from Figs. 1 and 15, we can reasonably assume that the ruler lies on a plane, but its bending can distort some distances between lines. Moreover, a few false line detections can occur (like the number 11 marked on the ruler's main body in Fig. 15). To exclude these cases, we compute the distance (in pixels) between all consecutive detected lines and eliminate possible outliers using the standard interquartile range (IQR) rule [62]. Denoting by \(Q_1\) and \(Q_3\) the lower and upper quartiles, an outlier is every element not contained in the interval \([Q_1-1.5(Q_3-Q_1),\ Q_3+1.5(Q_3-Q_1)]\). Finally, we compute the empirical mean, variance and standard deviation (SD) of the values within this range, thus getting a final indication of the scale of the ruler together with an indicator of the precision of the method.
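The IQR-based outlier removal and the final statistics can be sketched as follows (a minimal numpy version; `spacings` stands for the list of pixel distances between consecutive detected lines):

```python
import numpy as np

def filter_line_spacings(spacings):
    """Remove outlier inter-line distances with the standard IQR rule,
    then return mean, variance and standard deviation of the survivors."""
    d = np.asarray(spacings, dtype=float)
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    keep = d[(d >= q1 - 1.5 * iqr) & (d <= q3 + 1.5 * iqr)]
    return keep.mean(), keep.var(ddof=1), keep.std(ddof=1)
```

For example, a spurious 30-pixel gap among spacings clustered around 10 pixels falls outside \([Q_1-1.5\,\mathrm{IQR},\ Q_3+1.5\,\mathrm{IQR}]\) and is discarded before the statistics are computed.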
Table 1 Precision of the measurement scale detection for linear rulers on a sample of 30 images

SD min       SD max        Mean SD      RSD min        RSD max         Mean RSD
4.01 pixels  10.67 pixels  6.81 pixels  \(6.59\,\%\)   \(17.36\,\%\)   \(11.99\,\%\)
Table 2 Comparison between ruler scale detection using the manual ImageJ line tool and our Hough transform (HT) method, with corresponding measurements of the segmented blaze area obtained using the ImageJ ‘magic wand’ (MW) tool [50], trapezium fitting (Trap.) [54] (see also Fig. 2) and the graph Ginzburg–Landau (GL) minimisation
Despite this variability, our method provides a flexible, semi-supervised approach for this type of problem. Further tests on the whole set of images and improvements to its accuracy are left for future research. The analysis of the resulting measurements for the particular problem of flycatchers’ blaze segmentation will be the topic of the forthcoming paper [14].
We compare in Table 2 our combined approach with the manual line tool of the ImageJ software for the measurement of the blaze area. Namely, we measured in Fig. 1b and in Fig. 1c the ruler scale by means of the ImageJ line tool by considering, for each image, two different 3 cm sections of the ruler; we then measured manually the number of pixels contained in each, divided each measurement by 30 and averaged the two results to obtain an estimate of the ruler scale (i.e. the number of pixels crossed by a 1 mm horizontal or vertical line segment). We then measured the area of the blaze after segmenting it by means of the ImageJ ‘magic wand’ tool [50] and trapezium fitting [54] (see Fig. 2). The results are reported in Table 2 both as the number of image pixels inside the blaze and in \(\hbox {mm}^2\), where the second value has been calculated using the measurement scale detected as described above. We then repeated these measurements using our fully automated Hough transform method for ruler scale detection, again reporting the blaze area both as a number of image pixels and in \(\hbox {mm}^2\). We observe a good level of accuracy of our combined method (see also Table 1) with respect to the ‘magic wand’ manual approach of Moreno [50], while, unsurprisingly, the blaze measurements obtained by pure trapezium fitting as proposed by Potti and Montalvo in [54] tend to overestimate the area of the blaze.
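For illustration, the conversion from pixel counts to physical units used for the \(\hbox {mm}^2\) values amounts to the following (the scale value of 12.5 pixels/mm in the example is hypothetical):

```python
def area_mm2(n_pixels, pixels_per_mm):
    """Convert a segmented region's pixel count to mm^2,
    given the detected ruler scale in pixels per mm."""
    return n_pixels / pixels_per_mm ** 2

# e.g. a 10000-pixel blaze at a (hypothetical) scale of 12.5 pixels/mm:
print(area_mm2(10000, 12.5))  # -> 64.0
```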
4.2 Mole Monitoring for Melanoma Diagnosis and Staging
In this section, we focus on another application of the scale detection Algorithm 2 in the context of melanoma (skin cancer) monitoring, see Fig. 3. Early signs of melanoma are sudden changes in existing moles and are encoded in the mnemonic ABCD rule. They are Asymmetry, irregular Borders, variegated Colour and Diameter.^{3} In the following, we focus on the D sign.
In the examples reported in Fig. 16, we use the graph segmentation approach described in Algorithm 1 where characteristic texture regions are present (see Fig. 16a) and the Chan–Vese model [18] for images characterised by homogeneous mole and skin regions and regular mole boundaries (Fig. 16b, c). For the numerical implementation, we use the freely available online IPOL Chan–Vese segmentation code [29]. Let us point out that previous works using variational models for accurate melanoma segmentation already exist in the literature, see [1, 17], but no measurement technique is considered in them.
4.3 Other Applications: Animal Tracks and Archaeological Finds’ Measurement
We conclude this section by presenting some further applications of the combined segmentation and scale detection models described above.
The first application is the identification and classification of animals living in a given area through their soil, snow and mud footprints. Quantitative footprint measurements are also of interest when studying the age and size of a particular animal. As in the problems above, such measurements very often reduce to inaccurate readings from a ruler placed next to the footprint. In Fig. 17a,^{4} our combined method is applied to the measurement of a white-tailed deer footprint.
As a final application, we focus on archaeology. In many archaeological finds, objects need to be measured for comparisons and historical studies [34]. Figure 17b shows the application of our method to coin measurements. Due to the coin's circular shape, a combined Hough transform method for circle and line detection has been used for this image. The example image is taken from [34], where the authors propose a gradient threshold-based method combined with a Fourier transform approach. Despite being quite efficient for the particular applications considered, that approach relies in practice on the favourable experimental setting in which the image is taken: almost noise-free images and very regular objects with sharp boundaries (mainly coins) and homogeneous backgrounds are considered. Furthermore, results are reported only for rulers with vertical orientation and no bending.
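A fixed-radius circular Hough transform of the kind used for circle detection can be sketched as follows (an illustrative numpy accumulator, not the implementation used in our experiments):

```python
import numpy as np

def hough_circle_fixed_r(edges, r, n_theta=180):
    """Accumulate votes for circle centres of known radius r
    from a binary edge map of shape (H, W)."""
    H, W = edges.shape
    acc = np.zeros((H, W), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # each edge pixel votes for all candidate centres at distance r
        cy = np.rint(y - r * np.sin(thetas)).astype(int)
        cx = np.rint(x - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc  # the peak of acc locates the circle centre
```

Detecting circles of unknown radius adds a third accumulator dimension over candidate radii; for a coin of known physical diameter, the fixed-radius version above suffices once the ruler scale is known.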
5 Conclusions
In this paper, we consider image segmentation tasks involving the measurement of a region's size, which arise in several disciplines. For example, zoologists may be interested in quantitative measurements of some parts of the body of an animal, such as distinctive regions characterised by specific colours and texture, or in animal tracks to differentiate between individuals in a species. In medical applications, quantifying an evolving, possibly malignant mass (for instance, a skin melanoma) is crucial for early diagnosis and treatment. In archaeology, finds need to be measured and classified. In all these applications, a common measurement tool is often juxtaposed to the region of interest and its measurement is simply read directly from the image. This practice is typically inaccurate and imprecise due to the conditions in which pictures are taken. There may be noise corrupting the image, the object to be measured may be hard to distinguish, and the measurement tool may be misplaced and far from the object to measure. Moreover, the scale varies from image to image, since the distance between the camera and the ruler or objects to measure changes.
The method presented (based on [9]) consists of a semi-supervised approach which, by training the algorithm with some examples provided by the user, extracts relevant features from the training image (such as RGB intensities and texture) and uses them to detect similar regions in the unknown image. Mathematically, this translates into the minimisation of the discrete Ginzburg–Landau functional defined on graphs. To overcome the computational issues due to the size of the data, Nyström matrix completion techniques are used, and convex splitting is applied to design an efficient numerical scheme. The measurement scale detection task is performed using the Hough transform, a geometrical transformation capable of detecting objects with a priori known geometrical shapes (like lines on a ruler or circles with fixed diameter). Once the measurement scale is detected, all measurements are converted into a unit of measure which is not image dependent.
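For concreteness, the discrete Ginzburg–Landau energy on a graph can be sketched as follows (a minimal numpy version with one common normalisation and the standard double-well potential \(W(u)=\frac{1}{4}(u^2-1)^2\); the fidelity term and the Nyström approximation are omitted):

```python
import numpy as np

def gl_energy(u, W, eps):
    """Discrete Ginzburg-Landau energy on a graph with weight matrix W:
    (eps/2) <u, L u>  +  (1/eps) sum_i (u_i^2 - 1)^2 / 4."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                     # unnormalised graph Laplacian
    dirichlet = 0.5 * eps * u @ L @ u      # graph Dirichlet (smoothness) term
    double_well = np.sum(0.25 * (u**2 - 1)**2) / eps
    return dirichlet + double_well
```

The double-well term pushes each node value towards \(\pm 1\) (the two classes), while the Dirichlet term penalises disagreement between strongly connected nodes; minimisers therefore approximate binary labellings with small graph cuts.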
Our method represents a systematic and reliable combination of segmentation approaches applied to several real-world image quantification tasks. The use of dictionaries, moreover, allows for flexibility since, whenever needed, the training database can be updated. With respect to recent developments [68] in the field of data mining for the analysis of big data, where predictions are often performed using training sets and clustering, our approach represents an interesting alternative to standard machine learning algorithms (such as k-means).
Footnotes
 1.
Mole images from http://www.medicalprotection.org/uk/practicemattersissue3/skinlesionphotography, ©Chassenet/Science Photo Library, http://en.wikipedia.org/wiki/Melanoma (public domain), http://www.diomedia.com/stockphotocloseupofapapillomatousdermalnevusmolearaisedpigmentedskinlesionthatresultsfromaproliferationofbenignmelanocytesccidimage14515019.html, ©Phototake RM/ ISM.
 2.
‘Discrete GL functional with a data fidelity term’ would be a more accurate name, but we opt for brevity here.
 3.
Prevention: ABCD’s of Melanoma American Melanoma Foundation, http://www.melanomafoundation.org/prevention/abcd.htm.
 4.
Notes
Acknowledgments
Many thanks to Colm Caulfield, who introduced HMR and the bird segmentation problem to us mathematicians, and to Andrea Bertozzi for her very useful comments on the manuscript. LC acknowledges support from the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis (CCA). CBS acknowledges support from EPSRC grants Nr. EP/J009539/1 and EP/M00483X/1. AF was supported under ONR grant N0001415WX01350. HMR was supported by research grants from the Royal Society Nr. RG64240 and the British Ecology Society Nr. BES2322, and a Junior Research Fellowship from Churchill College. HMR is currently supported by an Institute Research Fellowship at the Institute of Zoology, by the Department of Zoology at the University of Cambridge, and Churchill College, Cambridge.
References
1. Abbas, Q., Fondon, I., Sarmiento, A., Emre Celebi, M.: An improved segmentation method for non-melanoma skin lesions using active contour model. In: Image Analysis and Recognition, Lecture Notes in Computer Science, pp. 193–200 (2014)
2. Abramoff, M.D., Magalhães, P.G., Ram, S.J.: Image processing with ImageJ. Biophotonics Int. 11(7), 36–42 (2004)
3. Allen, S.M., Cahn, J.W.: A microscopic theory for the antiphase boundary motion and its application to antiphase domain coarsening. Acta Metall. 27, 1085–1095 (1979)
4. Ambrosio, L., Tortorelli, V.M.: Approximation of functionals depending on jumps by elliptic functionals via \(\varGamma \)-convergence. Commun. Pure Appl. Math. 43, 999–1036 (1990)
5. Ambrosio, L., Tortorelli, V.M.: On the approximation of free discontinuity problems. Boll. Un. Mat. Ital. B (7) 6(1), 105–123 (1992)
6. Barles, G., Georgelin, C.: A simple proof of convergence for an approximation scheme for computing motions by mean curvature. SIAM J. Numer. Anal. 32, 484–500 (1995)
7. Belongie, S., Fowlkes, C., Chung, F., Malik, J.: Partitioning with indefinite kernels using the Nyström extension. ECCV, Copenhagen (2002)
8. Beltrametti, M.C., Massone, A., Piana, M.: Hough transform of special classes of curves. SIAM J. Imaging Sci. 6(1), 391–412 (2013)
9. Bertozzi, A.L., Flenner, A.: Diffuse interface models on graphs for classification of high dimensional data. Multiscale Model. Simul. 10(3), 1090–1118 (2012)
10. Braides, A.: Approximation of Free-Discontinuity Problems, vol. 1694 of Lecture Notes in Mathematics. Springer, Berlin (1998)
11. Braides, A., Dal Maso, G.: Non-local approximation of the Mumford–Shah functional. Calc. Var. Partial Differ. Equ. 5(4), 293–322 (1997)
12. Brokate, M., Sprekels, J.: Hysteresis and Phase Transitions, Applied Mathematical Sciences, vol. 121. Springer, New York (1996)
13. Burger, W., Burge, M.J.: Digital Image Processing—An Algorithmic Introduction Using Java. Springer, Berlin (2009)
14. Calatroni, L., van Gennip, Y., Schönlieb, C.B., Flenner, A., Coffey, P., Rowland, H.M.: Intraspecific variation in the head patches of male pied flycatchers (in preparation)
15. Canny, J.F.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)
16. Caselles, V., Kimmel, R., Sapiro, G.: Geodesic active contours. Int. J. Comput. Vis. 22(1), 61–79 (1997)
17. Cavalcanti, P.G., Scharcanski, J.: Macroscopic pigmented skin lesion segmentation and its influence on lesion classification and diagnosis. In: Color Medical Image Analysis, Lecture Notes in Computational Vision and Biomechanics, vol. 6. Springer, Berlin (2013)
18. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)
19. Chan, T.F., Sandberg, B., Vese, L.A.: Active contours without edges for vector-valued images. J. Vis. Commun. Image Represent. 11, 130–141 (2000)
20. Chung, F.R.K.: Spectral Graph Theory, vol. 92 of CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washington, DC (1997)
21. Coifman, R.R., Lafon, S., Lee, A.B., Maggioni, M., Nadler, B., Warner, F., Zucker, S.W.: Geometric diffusions as a tool for harmonic analysis and structure definition of data: diffusion maps. Proc. Natl. Acad. Sci. 102(21), 7426–7431 (2005)
22. Dale, S., Slagsvold, T., Lampe, H.M., Sætre, G.P.: Population divergence in sexual ornaments: the white forehead patch of Norwegian pied flycatchers is small and unsexy. Evolution 53, 1235–1246 (1999)
23. Duda, R.O., Hart, P.E.: Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 15, 11–15 (1972)
24. Esedoglu, S.: Blind deconvolution of bar code signals. Inverse Probl. 20(1), 121–135 (2004)
25. Esedoglu, S., Shen, J.: Digital inpainting based on the Mumford–Shah–Euler image model. Eur. J. Appl. Math. 13(4), 353–370 (2002)
26. Esedoglu, S., Tsai, Y.H.R.: Threshold dynamics for the piecewise constant Mumford–Shah functional. J. Comput. Phys. 211(1), 367–384 (2006)
27. Fowlkes, C., Belongie, S., Chung, F., Malik, J.: Spectral grouping using the Nyström method. IEEE Trans. Pattern Anal. Mach. Intell. 26(2), 214–225 (2004)
28. Garcia-Cardona, C., Merkurjev, E., Bertozzi, A.L., Flenner, A., Percus, A.G.: Multiclass data segmentation using diffuse interface methods on graphs. IEEE Trans. Pattern Anal. Mach. Intell. 36(8), 1600–1613 (2014)
29. Getreuer, P.: Chan–Vese segmentation. Image Process. On Line 2, 214–224 (2012)
30. Gilboa, G., Osher, S.: Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7(3), 1005–1028 (2008)
31. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Prentice Hall, London (2002)
32. Grah, J.: Methods for automatic mitosis detection and tracking in phase contrast images. M.Sc. thesis, University of Muenster (2014)
33. Guattery, S., Miller, G.L.: On the quality of spectral separators. SIAM J. Matrix Anal. Appl. 19(3), 701–719 (1998)
34. Herrmann, M., Zambanin, S., Kampel, M.: Image-based measurement of ancient coins, making history interactive. In: Computer Applications and Quantitative Methods in Archaeology (CAA), Proceedings of the 37th International Conference, Archaeopress, Oxford, pp. 117–121 (2010)
35. Hough, P.V.C.: Method and means for recognizing complex patterns. US Patent 3,069,654 (1962)
36. Hu, H., Sunu, J., Bertozzi, A.L.: Multi-class graph Mumford–Shah model for plume detection using the MBO scheme. In: Tai, X.C., et al. (eds.) Proceedings of EMMCVPR Hong Kong 2015, Springer Lecture Notes in Computer Science, vol. 8932, pp. 209–222 (2015)
37. Järvistö, P.E., Laaksonen, T., Calhim, S.: Forehead patch size predicts the outcome of male–male competition in the pied flycatcher. Ethology 119(8), 662–670 (2013)
38. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. Int. J. Comput. Vis. 1(4), 321–331 (1988)
39. Kohn, R.V., Sternberg, P.: Local minimisers and singular perturbation. Proc. R. Soc. Edinb. Sect. A 111(1–2), 69–84 (1989)
40. Lundberg, A., Alatalo, R.V.: The Pied Flycatcher. T & AD Poyser, London (1992)
41. Massone, A.M., Perasso, A., Campi, C., Beltrametti, M.C.: Profile detection in medical and astronomical images by means of the Hough transform of special classes of curves. J. Math. Imaging Vis. 51(2), 296–310 (2015)
42. Merkurjev, E., Bae, E., Bertozzi, A.L., Tai, X.C.: Global binary optimization on graphs for classification of high-dimensional data. J. Math. Imaging Vis. 52(3), 414–435 (2015)
43. Merkurjev, E., Garcia-Cardona, C., Bertozzi, A.L., Flenner, A., Percus, A.G.: Diffuse interface methods for multiclass segmentation of high-dimensional data. Appl. Math. Lett. 33, 29–34 (2014)
44. Merkurjev, E., Kostic, T., Bertozzi, A.L.: An MBO scheme on graphs for segmentation and image processing. SIAM J. Imaging Sci. 6(4), 1903–1930 (2013)
45. Merkurjev, E., Sunu, J., Bertozzi, A.L.: Graph MBO method for standoff detection in hyperspectral video. In: Proceedings of the International Conference on Image Processing, Paris, pp. 689–693 (2014)
46. Merriman, B., Bence, J., Osher, S.: Diffusion generated motion by mean curvature. In: Proceedings of the Computational Crystal Growers Workshop, Providence, Rhode Island, pp. 79–83 (1992)
47. Modica, L., Mortola, S.: Il limite nella \(\varGamma \)-convergenza di una famiglia di funzionali ellittici. Boll. Unione Mat. Ital. V. Ser. A 14, 526–529 (1977)
48. Modica, L., Mortola, S.: Un esempio di \(\varGamma \)-convergenza. Boll. Unione Mat. Ital. V. Ser. B 14, 285–299 (1977)
49. Morales, J., Moreno, J., Merino, S., Sanz, J.J., Tomas, G., Arriero, E., et al.: Female ornaments in the Pied Flycatcher Ficedula hypoleuca: associations with age, health and reproductive success. Ibis 149(2), 245–254 (2007)
50. Moreno, J., Velando, A., Ruiz-De-Castañeda, R., Cantarero, A., Gonzalez-Braojos, S., Redondo, A.: Plasma antioxidant capacity and oxidative damage in relation to male plumage ornamental traits in a montane Iberian Pied Flycatcher Ficedula hypoleuca population. Acta Ornithol. 46(1), 65–70 (2011)
51. Mumford, D., Shah, J.: Optimal approximation by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 42, 577–685 (1989)
52. Naeini, M.M., Dutton, G., Rothley, K., Mori, G.: Action recognition of insects using spectral clustering. In: MVA 2007 IAPR Conference on Machine Vision Applications (2007)
53. Nyström, E.J.: Über die praktische Auflösung von linearen Integralgleichungen mit Anwendungen auf Randwertaufgaben der Potentialtheorie. Commentationes Physico-Mathematicae 4(15), 1–52 (1928)
54. Potti, J., Montalvo, S.: Male arrival and female mate choice in Pied Flycatchers Ficedula hypoleuca in central Spain. Ornis Scand. 22(1), 45–54 (1991)
55. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)
56. Rubinstein, J., Sternberg, P., Keller, J.B.: Fast reaction, slow diffusion, and curve shortening. SIAM J. Appl. Math. 49, 116–133 (1989)
57. Ruuskanen, S., Lehikoinen, E., Nikinmaa, M., Siitari, H., Waser, W., Laaksonen, T.: Long-lasting effects of yolk androgens on phenotype in the pied flycatcher (Ficedula hypoleuca). Behav. Ecol. Sociobiol. 67(3), 361–372 (2013)
58. Sætre, G.P., Moum, T., Bures, S., Kral, M., Adamjan, M., Moreno, J.: A sexually selected character displacement reinforces premating isolation. Nature 387, 589–592 (1997)
59. Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 888–905 (2000)
60. Sirkiä, P.M., Laaksonen, T.: Distinguishing between male and territory quality: females choose multiple traits in the pied flycatcher. Anim. Behav. 78(5), 1051–1060 (2009)
61. Stoer, M., Wagner, F.: A simple min-cut algorithm. J. ACM 44(4), 585–591 (1997)
62. Upton, G., Cook, I.: Understanding Statistics. Oxford University Press, Oxford (1997)
63. van Gennip, Y., Bertozzi, A.L.: \(\varGamma \)-convergence of graph Ginzburg–Landau functionals. Adv. Differ. Equ. 17(11–12), 1115–1180 (2012)
64. van Gennip, Y., Guillen, N., Osting, B., Bertozzi, A.L.: Mean curvature, threshold dynamics, and phase field theory on finite graphs. Milan J. Math. 82(1), 3–65 (2014)
65. Varma, M., Zisserman, A.: A statistical approach to texture classification from single images. Int. J. Comput. Vis. 62(1–2), 61–81 (2005)
66. Vese, L.A., Chan, T.F.: Reduced non-convex functional approximation for image restoration & segmentation. UCLA CAM Report 97-56 (1997)
67. von Luxburg, U.: A tutorial on spectral clustering. Technical Report No. TR-149, Max Planck Institute for Biological Cybernetics (2006)
68. Witten, I.H., Frank, E., Hall, M.A.: Data Mining: Practical Machine Learning Tools and Techniques. Elsevier, Amsterdam (2011)