# Color image segmentation using multi-level thresholding approach and data fusion techniques: application in the breast cancer cells images


## Abstract

In this article, we present a new color image segmentation method based on multilevel thresholding and data fusion techniques, which aims at combining different data sources associated with the same color image in order to increase the information quality and to get a more reliable and accurate segmentation result. The proposed segmentation approach is conceptually different and explores a new strategy: instead of considering only one image for each application, our technique combines many realizations of the same image in order to increase the information quality and to get an optimal segmented image. The segmentation proceeds in two steps. In the first step, we identify the most significant peaks of the histogram, using an optimal multi-level thresholding based on the two-stage Otsu optimization approach. In the second step, the evidence theory is employed to merge several images represented in different color spaces, in order to get a final reliable and accurate segmentation result. The notion of mass functions, in the Dempster-Shafer (DS) evidence theory, is linked to the Gaussian distribution, and the final segmentation is achieved, on an input image expressed in different color spaces, by using the DS combination rule and decision. The algorithm is demonstrated through the segmentation of medical color images. The classification accuracy of the proposed method is evaluated, and a comparative study versus existing techniques is presented. The experiments were conducted on an extensive set of color images. Satisfactory segmentation results have been obtained, showing the effectiveness and superiority of the proposed method.

## Keywords

image segmentation; multi-level thresholding; medical color image; Dempster-Shafer's evidence theory; data fusion; conflict

## 1. Introduction

Image segmentation is considered an important basic operation for meaningful analysis and interpretation of acquired images [1, 2]. It is a classic inverse problem which consists of achieving a compact region-based description of the image scene by decomposing it into meaningful or spatially coherent regions sharing similar attributes.

Over the last few decades, several segmentation techniques, for both gray-level and color images, have been presented in the literature, and many methodologies have been proposed. There is still no segmentation technique that dominates the others for all kinds of color images [3, 4]. Our interest in this study is to segment medical color images. Many different techniques have been developed for this purpose; some formulations have been given by Harrabi and Ben Braiek [5] and Ben Chaabane et al. [6]. In most of the existing color image segmentation approaches, the definition of a region is based on similar color. Monochrome image segmentation techniques [7] can be extended to color images by using the RGB color space or its linear/nonlinear transformations.

Conventional color image segmentation techniques include thresholding techniques [5, 6, 8], data fusion techniques [9, 10, 11], and fuzzy logic [12, 13]. Preliminary studies using fuzzy techniques such as the Fuzzy C-Means (FCM) [14] and Hard C-Means (HCM) [15] algorithms have also been reported in the literature. However, the FCM algorithm has considerable difficulty in noisy environments, and the memberships resulting from this algorithm do not always correspond to the intuitive concept of degree of belonging or compatibility. The membership degrees are computed using only gray levels and do not take into account the spatial information of pixels with respect to one another. The HCM [15] is one of the oldest clustering methods, in which memberships are hard (i.e., 1 or 0).

An ideal segmentation method should have a classification rate of 100% and a false detection rate of 0%. In fact, the adaptation of segmentation techniques to different color images remains as a challenging task.

Recently, data fusion techniques have been tested for medical image segmentation [16]. Data fusion is a technique which simultaneously takes into account heterogeneous data coming from different sources, in order to obtain an optimal set of objects for investigation. The most significant advantage of data fusion techniques is their ability to handle uncertain, imprecise, and incomplete information; their main drawback is a prohibitive processing time. Among the existing data fusion methods, such as evidence theory [16], probability theory [17], fuzzy logic [18], and possibility theory [19], the Dempster-Shafer (DS) evidence theory [20, 21] offers a powerful and flexible mathematical tool for handling uncertain, imprecise, and incomplete information, although the determination of mass functions in image segmentation remains a hard task. In the past, many authors have addressed this problem using different methods [16, 21, 22]. In this context, Zimmerman and Zysno [23] proposed a mass function estimation method based on the distance of a point from a prototypical member. However, the major factor that influences the determination of the appropriate groups of points is the distance measure chosen for the problem at hand.

Most recent studies in color image segmentation [21, 24, 25] have used the DS evidence theory to fuse one-by-one the pixels coming from the three components (Red, Green, and Blue) of original image, in order to increase the quality of information and to obtain an optimal segmented image.

In this context, Ben Chaabane et al. [21] aim at providing help to the doctor in the follow-up of breast cancer diseases. The objective is to rebuild each cell from the three primitive colors (R, G, and B) of the original image. Starting from an initial segmentation obtained by histogram thresholding, one seeks a segmentation which represents as well as possible the points really forming part of the cells, as well as the number of cells. The methodology (*DDS*) is based on the application of the evidence theory to fuse the information coming from the three images (R, G, and B).

With the same objective, Ben Chaabane et al. [24] have extended the general idea of mass function estimation in the DS evidence theory of the histogram to the homogeneity domain (*HHDS*) to take into account the spatial information. The homogeneity histogram is used to express the local and global information among pixels in an image. The authors have shown through empirical studies that a good model of the mass functions estimation in the DS evidence theory is based on the assumption of Gaussian distribution (GD) and the homogeneity histogram analysis technique.

In particular, several researchers have investigated the relationship between fuzzy sets and DS evidence theory. Most analytic fuzzy approaches are derived from Bezdek's FCM algorithm applied to the grey level images to automatically determine the membership degree of each pixel.

The general idea (*FCMDS*) proposed by Ben Chaabane et al. [4] is to assign, at each image pixel, a mass function that corresponds to a membership degree obtained by applying the FCM algorithm to the gray levels of the image. However, this algorithm has a considerable drawback in noisy environments, and the membership degrees are computed using only the gray levels, without taking into account the spatial information of pixels with respect to one another. To overcome this limitation, the authors reformulated the fuzzy clustering problem so that the clustering method can generate memberships with a typicality interpretation. This method, called *PCMDS* [25], is based on the Possibilistic C-Means (PCM) algorithm [26].

The determination of the mass function does not only take into account the advantage of the fuzzy framework, but also considers the spatial relation of the membership degrees among neighboring pixels to explore the image features.

In fact, the main difference between the various methods cited in the references above lies in the method of mass functions estimation and in its application.

The evidence theory is employed to merge the three primitive colors (R, G, and B) of the same image in order to increase the quality of the information and to obtain an optimal segmented image. The estimation of mass functions in the DS evidence theory is based on the assumption of GD [16, 24], or fuzzy sets [4, 22, 25]. In principle, only one image is considered for each application, whereas many realizations of the same image fused together may be very helpful for the segmentation process.

In this context, Mignotte [27] has proposed a segmentation approach based on a fusion procedure which aims at combining several segmentation maps associated to simpler partition models. He also described the fusion strategy which aims at combining the segmentation maps with a final clustering procedure using as input features, the local histogram of the class labels, previously estimated and associated to each site and for all these initial partitions.

The main contribution of the algorithm proposed by Mignotte [27] lies in the use of several different color spaces and in a decentralized fusion procedure. The methodology is based on the application of the *K*-means clustering technique to fusion information. In fact, this method has successfully been applied on the Berkeley image database.

Examples [27] are provided, showing that the images provided by the {RGB, HIS, YIQ, XYZ, LAB, and LUV} color spaces are redundant and complementary. In this context, image segmentation using data fusion techniques appears to be an interesting method.

Data fusion is a technique which simultaneously takes into account heterogeneous data coming from different sources, in order to obtain an optimal set of objects for investigation. Of the existing data fusion methods such as probability theory [17], fuzzy logic [18], possibility theory [19], evidence theory [20], the DS evidence theory [20, 21] is a powerful and flexible mathematical tool for handling uncertain, imprecise, and incomplete information.

Modeling both uncertainty and imprecision computing the conflict between images, and introducing *a priori* information are the main features of this theory. An important property of this theory is its ability to merge different data sources in order to increase the information quality.

This article is devoted to fusing many realizations of the same image, applied to a specific kind of medical image segmentation, where we aim at providing help to the doctor in the follow-up of breast cancer diseases. The problem of color image segmentation is addressed using the DS theory.

In fact, this method may be seen as a straightforward complement to the work proposed by Ben Chaabane et al. [4, 24, 25]. The objective is to rebuild each cell from a series of six images represented in different color spaces. The idea is based on multilevel thresholding and data fusion techniques. From an initial segmentation obtained by using a two-stage Otsu optimization approach (TSMO), applied to each image to be fused, one seeks a segmentation which represents the cells as well as possible. More precisely, this study proposes a fusion framework which aims at fusing several multi-level thresholding results applied on an input image expressed in different color spaces. These different pieces of information are fused together by the DS evidence theory, using as input features the mass functions extracted from the input image expressed in different color spaces, previously estimated and associated with each pixel. The assumption of GD is used to calculate the mass functions of each pixel. Once the mass functions are estimated for each image to be fused, the DS combination rule and decision are applied to obtain the final segmentation. Consequently, the proposed algorithm uses a centralized fusion model that requires the availability of all the images simultaneously, and no intermediate decision is taken before fusion.

This article demonstrates that the proposed fusion method, while complex and requiring a large processing time for computing the mass functions of the information to be combined and the DS orthogonal sum, improves the segmentation results in terms of segmentation sensitivity, in comparison with recent segmentation methods from the literature, applied on the color cell image database provided with permission from the Cancer Service, Salah Azaiez Hospital, Bab Saadoun, Tunis, Tunisia.

The rest of this article is organized as follows: Section 2 introduces the proposed color image segmentation method, Section 3 reports simulation examples that assess it, and Section 4 concludes the article.

## 2. Proposed method

In the framework of our application, we are interested in color image segmentation of cells in the breast images. The problem is to separate cells from the background. The initial segmentation maps which will then be fused together are simply given, in our application, by the *TSMO*[28], applied on an input image expressed by different color spaces and using as input the set of pixel values provided by these images.

The multilevel thresholding technique is used to extract homogeneous regions, in each image, to be fused. Once the mass functions are estimated by the assumption of GD, the DS combination rule is applied to obtain the final segmented image. Hence, the main idea of the proposed method is to fuse, one-by-one, the pixels of the input image expressed by six color spaces.

In this application, we use *N* s segmentations provided by the *N* s = 6 color spaces, namely the C = {RGB, HIS, YIQ, XYZ, LAB, and LUV} color spaces. The examples show that the images provided by these different sources are redundant and complementary [27]. In this sense, data fusion techniques appear as an appealing approach for color image segmentation.
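For concreteness, the following sketch shows how one such redundant view can be derived from RGB. The linear sRGB-to-XYZ matrix (D65 white point) is standard; the function name and sample pixel are our own illustration, not part of the original method.

```python
def rgb_to_xyz(r, g, b):
    """Convert one linear-RGB pixel (sRGB primaries, D65 white point,
    components in [0, 1]) to CIE XYZ -- one of the N_s = 6 views of the image."""
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

# A pure white pixel maps to the D65 white point (Y normalized to 1).
x, y, z = rgb_to_xyz(1.0, 1.0, 1.0)
```

The other spaces (YIQ, LAB, LUV, ...) are obtained by analogous linear or nonlinear transformations, which is why the six views are redundant yet complementary.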

The purpose of this study is to apply this method to medical image segmentation. We aim at providing assistance to the doctor in the follow-up of breast cancer diseases. The objective is to rebuild each cell from a series of *N* s representative component images provided by the input image expressed in *N* s color spaces. From an initial segmentation obtained by using the histogram thresholding technique [28], one seeks a segmentation which represents the cells as well as possible, in order to give the doctors a schema of the points really forming part of the cells, as well as the number of cells.

The selection of the best and representative component images is based on the segmentation sensitivity criterion [25]. The best component images used in our application are the R, H, Y, X, A, and L components for the input image expressed in the {RGB, HIS, YIQ, XYZ, LAB, and LUV} color spaces, respectively.

To do this, histogram thresholding technique is applied to the 18 redundant features (R, G, B, H, S, V, Y, I, Q, X, Y, Z, L, A, B, L, U, and V) and the feature with the best segmentation sensitivity is selected in each color space.

For example, since many pixels are incorrectly segmented by the green and blue features for this specific class of images (color cell images), the red feature is selected as the best and most representative feature in the RGB color space. This selection is repeated for all color spaces used in our application.

The concept of the two-stage Otsu thresholding technique [28] is used to find the a priori knowledge, namely the mean (μ) and the standard deviation (σ) of each region of the images to be fused. The representation of the information is based on the assumption of a GD. Once these measures are determined for each component image to be fused, the DS combination rule and decision are applied to obtain the final segmentation.

The evidence theory, also called DS theory, was first introduced by Dempster [29], and formalized by Shafer [30]. This theory is often described as a generalization of the Bayesian theory to represent at the same time the inaccurate and uncertain information. It defines a framework of understanding representing all the subsets of the classes' spaces. The principal advantage of this theory is to affect a degree of confidence which is called mass function to all simple and composed classes, and to take into account the ignorance of the information.

The classes (*C*_{ i }) generated by the multilevel thresholding method create the frame of discernment Ω, composed of *n* single mutually exclusive subsets *H*_{1}, ..., *H*_{ n }, which are symbolized by Ω = {*H*_{1}, *H*_{2}, ..., *H*_{ n }}. The theory defines a mass function *m*, which assigns values in [0, 1] to each subset of the discernment set Ω. The function *m* is defined from 2^{Ω} to [0, 1], verifying

$$m\left(\emptyset \right)=0,\qquad \sum _{A\subseteq \Omega} m\left(A\right)=1.$$

If *m*(*A*) > 0, *A* is called a focal element.

The pieces of evidence coming from *Q* sources are combined with the DS orthogonal rule [24]. The DS combination can be represented for *Q* information sources by the following orthogonal rule:

$$m={m}^{1}\oplus {m}^{2}\oplus \cdots \oplus {m}^{Q},$$

where ⊕ denotes the DS orthogonal sum.

For two sources, the joint mass function (*m*^{12}) is calculated from the aggregation of two mass functions *m*^{1} and *m*^{2} associated with information sources *S*_{1} and *S*_{2}. Then, for every non-empty subset *A* ⊆ Ω,

$${m}^{12}\left(A\right)=\frac{1}{1-K}\sum _{B\cap C=A}{m}^{1}\left(B\right)\,{m}^{2}\left(C\right),$$

where the normalization coefficient *K* is defined by [25]:

$$K=\sum _{B\cap C=\emptyset}{m}^{1}\left(B\right)\,{m}^{2}\left(C\right).$$

The normalization coefficient *K* evaluates the conflict between the sources *S*_{1} and *S*_{2}. This is determined by summing the products of mass functions of all sets where the intersection is an empty set.
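The orthogonal sum and the conflict *K* can be sketched for a two-class frame Ω = {C1, C2} with focal elements C1, C2, and C1 ∪ C2 (written `C12` below); the dictionaries and mass values are illustrative:

```python
def ds_combine(m1, m2):
    """Dempster-Shafer orthogonal sum on the frame {C1, C2, C1 u C2}.
    m1, m2 are dicts over the focal elements 'C1', 'C2', 'C12'."""
    inter = {  # set intersection table on the frame
        ('C1', 'C1'): 'C1', ('C1', 'C12'): 'C1', ('C12', 'C1'): 'C1',
        ('C2', 'C2'): 'C2', ('C2', 'C12'): 'C2', ('C12', 'C2'): 'C2',
        ('C12', 'C12'): 'C12',
        ('C1', 'C2'): None, ('C2', 'C1'): None,  # empty intersection -> conflict
    }
    joint = {'C1': 0.0, 'C2': 0.0, 'C12': 0.0}
    conflict = 0.0  # K: total mass assigned to empty intersections
    for a, pa in m1.items():
        for b, pb in m2.items():
            target = inter[(a, b)]
            if target is None:
                conflict += pa * pb
            else:
                joint[target] += pa * pb
    # Dempster normalization by 1 - K
    return {k: v / (1.0 - conflict) for k, v in joint.items()}, conflict

m1 = {'C1': 0.6, 'C2': 0.1, 'C12': 0.3}
m2 = {'C1': 0.5, 'C2': 0.2, 'C12': 0.3}
joint, K = ds_combine(m1, m2)
```

The combined masses again sum to 1, and the class supported by both sources (C1 here) gains mass at the expense of the conflicting evidence.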

The DS theory is applied in various areas [3, 4, 24, 25], but, in image segmentation, the determination of mass functions is a hard task and the performance of the segmentation scheme is, however, largely conditioned by the appropriate determination of the mass functions. In this study, the method of generating the mass functions is based on the assumption of a GD [24].

### 2.1. Mass function of simple hypotheses

The mass function of a simple hypothesis *C*_{ i } is obtained from the assumption of a GD of the information ${g}_{xy}^{q}$ of a pixel ${p}_{xy}^{q}$ at the location (*x*, *y*) from an input image expressed in the *q*th feature, for cluster *i*, as follows:

$${m}_{xy}^{q}\left({C}_{i}\right)=\frac{1}{{\sigma}_{i}\sqrt{2\pi}}\mathrm{exp}\left(-\frac{{\left({g}_{xy}^{q}-{\mu}_{i}\right)}^{2}}{2{\sigma}_{i}^{2}}\right),$$

where *μ*_{ i } and ${\sigma}_{i}^{2}$, which represent, respectively, the mean and the variance of the class *C*_{ i } present in each feature to be fused, are estimated by

$${\mu}_{i}=\frac{1}{{n}_{i}}\sum _{\left(x,y\right)\in {C}_{i}}{g}_{xy}^{q},\qquad {\sigma}_{i}^{2}=\frac{1}{{n}_{i}}\sum _{\left(x,y\right)\in {C}_{i}}{\left({g}_{xy}^{q}-{\mu}_{i}\right)}^{2},$$

where *n*_{ i } denotes the number of pixels in the class *C*_{ i } .
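A minimal sketch of this Gaussian mass assignment follows; the class pixel values are hypothetical and the function names are ours:

```python
import math

def gaussian_mass(g, mu, sigma):
    """Mass of gray level g for a class modeled as N(mu, sigma^2)."""
    return (math.exp(-((g - mu) ** 2) / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

def class_stats(pixels):
    """Estimate (mu, sigma) of one class from its member pixel values."""
    n = len(pixels)
    mu = sum(pixels) / n
    var = sum((p - mu) ** 2 for p in pixels) / n
    return mu, math.sqrt(var)

# Hypothetical classes from an initial thresholding: dark cells vs. bright background.
cells = [40, 45, 50, 55, 60]
background = [180, 190, 200, 210, 220]
mu1, s1 = class_stats(cells)       # mean near 50
mu2, s2 = class_stats(background)  # mean near 200
g = 70  # a pixel to evaluate
m = {'C1': gaussian_mass(g, mu1, s1), 'C2': gaussian_mass(g, mu2, s2)}
```

A pixel close to the dark mode thus receives far more mass for the cell class than for the background class.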

### 2.2. Mass function of double hypotheses

The advantage of DS theory is that the evidence can be associated with multiple possible events, for example, sets of events. One of the most important features of DS theory is that the model is designed to cope with varying levels of precision regarding the information.

The mass function of a double (composite) hypothesis *C*_{ j } = *C*_{1} ∪ *C*_{2} ∪ ... ∪ *C*_{ T } is determined as follows:

$${m}_{xy}^{q}\left({C}_{j}\right)=\frac{1}{{\sigma}_{j}\sqrt{2\pi}}\mathrm{exp}\left(-\frac{{\left({g}_{xy}^{q}-{\mu}_{j}\right)}^{2}}{2{\sigma}_{j}^{2}}\right),$$

where ${\mu}_{j}=\frac{1}{T}\sum _{i=1}^{T}{\mu}_{i}$, *σ*_{ j } = max(*σ*_{1}, *σ*_{2},...,*σ*_{ T } ) and 2 ≤ *T* ≤ *n*.
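The composite-hypothesis parameters can be sketched as follows (values and names are illustrative); a pixel lying between two class modes naturally receives a large composite mass:

```python
import math

def gauss(g, mu, sigma):
    """Gaussian mass of gray level g under N(mu, sigma^2)."""
    return (math.exp(-((g - mu) ** 2) / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

def double_mass(g, mus, sigmas):
    """Mass of the composite hypothesis C1 u ... u CT: a Gaussian with
    mu_j = mean of the class means and sigma_j = max of the class std devs."""
    mu_j = sum(mus) / len(mus)
    sigma_j = max(sigmas)
    return gauss(g, mu_j, sigma_j)

# A pixel halfway between two classes (means 50 and 200, both sigma = 10):
m_single_1 = gauss(125, 50.0, 10.0)   # negligible
m_single_2 = gauss(125, 200.0, 10.0)  # negligible
m_double = double_mass(125, [50.0, 200.0], [10.0, 10.0])  # dominant
```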

In our application, the determination of the mass function does not only take into account the advantage of the Gaussian model, but also considers the neighborhood information of the measures to explore the images features.

The final mass function of each pixel is estimated over its neighbors at location (*x*, *y*) in the window ${w}_{xy}^{q}$ of size (*t* × *t*). The spatial scanning order of an (*N* × *M*) image is performed, as shown in Figure 1, from left to right and top to bottom, pixel-by-pixel.

The final mass function ${\overline{m}}_{xy}^{q}$ of a pixel at location (*x*, *y*) from an input image expressed in the *q*th color space within the window ${w}_{xy}^{q}$ is computed by

$${\overline{m}}_{xy}^{q}\left({C}_{i}\right)=\frac{1}{{t}^{2}}\sum _{\left(x\prime ,y\prime \right)\in {w}_{xy}^{q}}{m}_{x\prime y\prime}^{q}\left({C}_{i}\right),$$

where *x* ≥ (*t*+1)/2, *x*' ≤ *M* − ((*t*−1)/2), *y* ≥ (*t*+1)/2, and *y*' ≤ *N* − ((*t*−1)/2).

However, the size of the window affects the computation of the final mass function value. The window should be big enough to provide sufficient information for the computation of each pixel measure; moreover, a larger window decreases the noise effect, but also increases the processing time significantly. As a trade-off, a (7 × 7) window is chosen experimentally for computing the final mass function of each pixel *p*_{ xy } .
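A plausible reading of this windowed estimate is a local average of the per-pixel masses; the sketch below uses that interpretation (the exact aggregation may differ from the paper's), with a hypothetical one-class mass map:

```python
def window_mean(mass, N, M, t=7):
    """Final mass at (x, y): average of the per-pixel masses inside the
    (t x t) window centered on (x, y); border pixels are left unchanged."""
    r = t // 2
    out = [row[:] for row in mass]  # copy so the input map is preserved
    for x in range(r, N - r):
        for y in range(r, M - r):
            s = sum(mass[i][j]
                    for i in range(x - r, x + r + 1)
                    for j in range(y - r, y + r + 1))
            out[x][y] = s / (t * t)
    return out

# A single noisy spike in a 9x9 mass map is diluted by a 3x3 window.
noisy = [[1.0 if (i, j) == (4, 4) else 0.0 for j in range(9)] for i in range(9)]
smooth = window_mean(noisy, 9, 9, t=3)
```

This illustrates why a larger window suppresses "salt and pepper" noise: isolated outlier masses are averaged away.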

Note that this operation is commutative and associative.

In the case where the frame of discernment Ω of each feature to be combined is composed of two classes (*C*_{ i } ), the function *m* is defined from 2^{Ω} = {*C*_{1}, *C*_{2}, *C*_{1} ∪ *C*_{2}, *ϕ*} to [0, 1].

Specifically, the joint mass function ${\overline{m}}_{xy}^{12}$ is calculated from the aggregation of two mass functions ${\overline{m}}_{xy}^{1}$ and ${\overline{m}}_{xy}^{2}$ associated with the 1st and the 2nd features, i.e., the Red and the Hue features:

$${\overline{m}}_{xy}^{12}\left({C}_{i}\right)=\frac{{\overline{m}}_{xy}^{1}\left({C}_{i}\right)\,{\overline{m}}_{xy}^{2}\left({C}_{i}\right)+{\overline{m}}_{xy}^{1}\left({C}_{i}\right)\,{\overline{m}}_{xy}^{2}\left({C}_{1}\cup {C}_{2}\right)+{\overline{m}}_{xy}^{1}\left({C}_{1}\cup {C}_{2}\right)\,{\overline{m}}_{xy}^{2}\left({C}_{i}\right)}{1-{K}_{1}},$$

where ${K}_{1}={\overline{m}}_{xy}^{1}\left({C}_{2}\right)\,{\overline{m}}_{xy}^{2}\left({C}_{1}\right)+{\overline{m}}_{xy}^{1}\left({C}_{1}\right)\,{\overline{m}}_{xy}^{2}\left({C}_{2}\right)$.

After combining all the sources, each pixel is assigned to the class *C*_{ i } with the maximum joint mass. The decision making is carried out on simple hypotheses that represent the classes in the images. If we accepted the composite hypotheses as final results in the decisional procedure, the segmentation results would be more reliable but less precise. The proposed method is summarized by the flowchart given in Figure 3.
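This decision step can be sketched in a few lines (class names and mass values are illustrative):

```python
def decide(joint_mass, simple_hypotheses=('C1', 'C2')):
    """Final decision: assign the pixel to the simple hypothesis (never a
    composite one) carrying the maximum combined mass."""
    return max(simple_hypotheses, key=lambda c: joint_mass.get(c, 0.0))

# The composite mass on C1 u C2 is ignored at decision time.
label = decide({'C1': 0.55, 'C2': 0.25, 'C12': 0.20})
```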

## 3. Experimental results

The initial multi-level thresholding of each feature is performed with the two-stage Otsu optimization approach *TSMO-16* [28]. The size of the squared window used to compute the final mass functions of each pixel in the DS evidence theory is set to (7 × 7).

We use *N* s = 6 segmentations provided by the following color spaces RGB, HSV, YIQ, XYZ, LAB, and LUV.

We applied our fusion method to the same image expressed in different color spaces, and we compare the performance of our proposed algorithm to those in other published reports that have recently been applied to color images.

Figure 8b, c shows real medical cell images, obtained by immunohistochemical staining in the Cancer Service previously cited.

Figures 4, 5, 6, and 7 show the results of the proposed method. Figure 5 shows an example of the multi-level thresholding technique applied to the R, H, Y, X, A, and L features of the image expressed in the RGB, HSV, YIQ, XYZ, LAB, and LUV color spaces, respectively, and the final segmentation map that results from the fusion of these (*N* s = 6) clusterings.

In fact, the experimental result presented in Figure 5 at bottom right is quite consistent with the visualized color distribution in the objects, which makes it possible to determine the cells number. The other resulting images contain some holes and missing features in the cells. This demonstrates the necessity of using the fusion process.

**Table 1.** Segmentation sensitivity (%) for the clustering result in each color space and for the fusion result given by our algorithm, for the dataset shown in Figure 4

| Image | R (RGB) | H (HSV) | Y (YIQ) | X (XYZ) | A (LAB) | L (LUV) | Fusion |
|---|---|---|---|---|---|---|---|
| Image 1 | 0.9770 | 0.8796 | 0.9582 | 0.9618 | 0.9695 | 0.9719 | 0.9959 |
| Image 2 | 0.9364 | 0.8609 | 0.9409 | 0.9563 | 0.9339 | 0.9641 | 0.9944 |
| Image 3 | 0.9359 | 0.8743 | 0.9583 | 0.9600 | 0.9395 | 0.9658 | 0.9879 |
| Image 4 | 0.9536 | 0.8722 | 0.9711 | 0.9713 | 0.9441 | 0.9676 | 0.9927 |
| Image 5 | 0.9147 | 0.8747 | 0.9466 | 0.9498 | 0.9166 | 0.9671 | 0.9926 |
| Image 6 | 0.9577 | 0.8571 | 0.9649 | 0.9669 | 0.9398 | 0.9685 | 0.9898 |
| Image 7 | 0.9487 | 0.8642 | 0.9658 | 0.9661 | 0.9359 | 0.9682 | 0.9831 |
| Image 8 | 0.9785 | 0.8726 | 0.9767 | 0.9776 | 0.9652 | 0.9734 | 0.9982 |
| Image 9 | 0.9770 | 0.9295 | 0.9541 | 0.9593 | 0.9619 | 0.9754 | 0.9912 |
| Image 10 | 0.8576 | 0.9918 | 0.9975 | 0.9972 | 0.9943 | 0.9952 | 0.9991 |
| Image 11 | 0.9865 | 0.9971 | 0.9975 | 0.9977 | 0.9858 | 0.9748 | 0.9982 |
| Image 12 | 0.9578 | 0.9666 | 0.8297 | 0.8065 | 0.8096 | 0.9385 | 0.9881 |

The proposed segmentation approach is conceptually different and explores a new strategy; in fact, instead of considering only one image for each application [4, 16, 24, 25], many realizations of the same image fused together may be very helpful to the segmentation process. The idea is to fuse one-by-one the pixels coming from different information sources (the input image expressed in six color spaces), in order to avoid over-segmentation and to obtain an optimal segmented image.

Figure 6 displays some examples of segmentations obtained by our algorithm, compared with other methods [16, 21, 22, 24].

The comparison of the proposed approach will be presented through the next experiment. Figure 6b-e shows the final segmentation results obtained from the *DDS*, the *FCMDS*, the *HHDS*, and our *TSMODS* algorithms, respectively.

In fact, the experimental results presented in Figure 6e are quite consistent with the visualized color distributions in the objects, which make it possible to do an accurate measurement of cell volumes. In short, the proposed algorithm outperforms all these well-known segmentation algorithms in terms of segmentation sensitivity (Sen(%)).

**Table 2.** Segmentation sensitivity (Sen(%)) from *DDS*, *FCMDS*, *HHDS*, and *TSMODS* for the dataset shown in Figure 4

| Image | DDS | FCMDS | HHDS | TSMODS |
|---|---|---|---|---|
| Image 1 | 0.9525 | 0.9741 | 0.9730 | 0.9959 |
| Image 2 | 0.9260 | 0.9453 | 0.9782 | 0.9944 |
| Image 3 | 0.9491 | 0.9465 | 0.9476 | 0.9879 |
| Image 4 | 0.9595 | 0.9604 | 0.9658 | 0.9927 |
| Image 5 | 0.9293 | 0.9697 | 0.9706 | 0.9926 |
| Image 6 | 0.9565 | 0.9567 | 0.9637 | 0.9898 |
| Image 7 | 0.9542 | 0.9550 | 0.9606 | 0.9831 |
| Image 8 | 0.9685 | 0.9687 | 0.9798 | 0.9982 |
| Image 9 | 0.9492 | 0.9747 | 0.9699 | 0.9912 |
| Image 10 | 0.9105 | 0.9350 | 0.9387 | 0.9991 |
| Image 11 | 0.9927 | 0.9925 | 0.9972 | 0.9982 |
| Image 12 | 0.9765 | 0.9861 | 0.9769 | 0.9897 |

To evaluate the performance of the proposed segmentation algorithm, its accuracy was recorded.

Regarding the accuracy, Tables 1 and 2 list the segmentation sensitivity of the different methods for the dataset used in the experiment.

The segmentation sensitivity is defined as

$$\mathrm{Sens}=\frac{{N}_{\mathrm{pcc}}}{N\times M}\times 100,$$

where Sens, *N*_{pcc}, and *N* × *M* are, respectively, the segmentation sensitivity (%), the number of correctly classified pixels, and the dimension of the image.

A correctly classified pixel is a pixel whose label equals that of its corresponding pixel in the reference image, as shown in Figure 6f. The image segmentation ground truth is generated manually by the doctor (specialist) using the original image. Figure 6f shows the ideal segmented image.
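The sensitivity computation against such a ground truth can be sketched as follows (a hypothetical toy example on flattened label arrays; the function name is ours):

```python
def sensitivity(labels, ground_truth):
    """Sens = N_pcc / (N x M): fraction of pixels whose label matches the
    manually produced ground-truth image (both flattened to 1-D here)."""
    assert len(labels) == len(ground_truth)
    n_pcc = sum(1 for l, t in zip(labels, ground_truth) if l == t)
    return n_pcc / len(ground_truth)

# Hypothetical 5-pixel example: 4 of 5 labels agree with the ground truth.
sens = sensitivity([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```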

The performance of the proposed method is quite acceptable.

In fact, from Table 2 and Figure 6b-d, one can observe that 5.09, 5.35, and 5.24% of pixels were incorrectly segmented by the *DDS*, *FCMDS*, and *HHDS* methods, respectively.

Indeed, only 1.21% of pixels were incorrectly segmented in Figure 6e. This performance gap between the methods can also easily be assessed by visually comparing the segmentation results.

We have also shown in Figure 7 the influence of the window size (used to estimate the final mass functions) on the segmentation results. Figure 7b-f shows the final segmentation results obtained from the proposed algorithm by using a sliding window of size (3 × 3), (5 × 5), (7 × 7), (9 × 9), and (11 × 11) to estimate the final mass functions, respectively, when a "salt and pepper" noise of *D* density is added to the original image *I*, shown in Figure 7a.

This affects approximately (*D* × (*N* × *M*)) pixels. The value of *D* is 0.02.

These tests show that the segmentation sensitivity (Sen(%)) is highly sensitive to this parameter. In brief, the experimental results best match the visualized color distribution in the objects when the size of the squared window is set to (7 × 7).

We can also notice (see Figure 9) that the performance measure (Sen(%)) is only 0.935 when the segmentation number *N* s is equal to 1. The segmentation sensitivity is rather high, up to 0.9879, when six segmentations (*N* s = 6) are used. This experiment shows the validity of our fusion procedure and also the significantly improved performance in segmentation. The proposed method can be useful for color image segmentation.

## 4. Conclusions

In this article, the authors have presented a new segmentation strategy based on a fusion procedure whose goal is to combine several segmentation maps in order to finally get a more reliable and efficient segmentation with good accuracy.

The proposed segmentation approach is conceptually different and explores a new strategy. In fact, instead of considering only one image for each application, many realizations of the same image fused together may be very helpful to the segmentation process. The idea is to fuse one-by-one the pixels coming from different information sources, in order to get a final reliable and accurate segmentation result.

The obtained results demonstrate significantly improved segmentation performance. The fusion method remains general enough to be applied in various computer vision applications, and to be widely used in other applications in the field of medical image segmentation and enhancement.

## Notes

### Acknowledgements

The authors would like to thank Dr. Khaled Ben Romdhane, from the Cancer Service of Salah Azaiez Hospital, Bab Saadoun, Tunis, for his help and his thoughtful comments.


## References

1. Kwon MJ, Han YJ, Shin IH, Park HW: Hierarchical fuzzy segmentation of brain MR images. *Int J Imag Syst Technol* 2003, 13:115-125. doi:10.1002/ima.10035
2. Navon E, Miller O, Averbuch A: Colour image segmentation based on adaptive local thresholds. *Image Vis Comput* 2005, 23(1):69-85. doi:10.1016/j.imavis.2004.05.011
3. Gautier L, Taleb-Ahmed A, Rombaut M, Postaire JG, Leclet H: Decision support of image segmentation by the Dempster-Shafer theory: application to a sequence of IRM images. *Edit Sc et med Elsevier SAS* 2001, 22:378-392.
4. Ben Chaabane S, Sayadi M, Fnaiech F, Brassart E: Dempster-Shafer evidence theory for image segmentation: application in cells images. *Int J Signal Process* 2009, 5(1):126-132.
5. Harrabi R, Ben Braiek E: Color image segmentation using automatic thresholding techniques. *SSD'2011, Tunisia* 2011, 1-6.
6. Ben Chaabane S, Sayadi M, Fnaiech F, Brassart E: Color image segmentation using automatic thresholding and the fuzzy C-means techniques. *IEEE Mediterranean Electrotechnical Conference, MELECON'2008, Ajaccio, France* 2008, 857-861.
7. Harrabi R, Ben Braiek E: A comparative study of color image segmentation techniques using different color representation. *JTEA, Tunisia* 2010, 1-6.
8. Cheng HD, Jiang XH, Wang J: Color image segmentation based on homogram thresholding and region merging. *Pattern Recognit* 2002, 25:373-393.
9. Ben Chaabane S, Sayadi M, Fnaiech F, Brassart E: Color image segmentation based on Dempster-Shafer evidence theory. *14th IEEE Mediterranean Electrotechnical Conference, MELECON 2008* 2008, 862-866.
10. Bloch I, Maitre H: Fusion of image information under imprecision. In *Aggregation and Fusion of Imperfect Information*. Edited by Bouchon-Meunier B. Physica-Verlag, Springer; 1997:189-213. Series Studies in Fuzziness.
11. Bloch I: Information combination operators for data fusion: a comparative review with classification. *IEEE Trans SMC* 1995, 26(1):52-67.
12. Bezdek JC: *Pattern Recognition with Fuzzy Objective Function Algorithms*. Plenum Press, New York; 1981.
13. Liew AWC, Leung SH, Lau WH: Fuzzy image clustering incorporating spatial continuity. *IEE Proc Visual Image Signal Process* 2000, 147:185-192. doi:10.1049/ip-vis:20000218
14. Yang Y, Zheng Ch, Lin P: Fuzzy C-means clustering algorithm with a novel penalty term for image segmentation. *Opto-Electron Rev* 2005, 13(4):309-315.
15. Duda R, Hart P: *Pattern Classification and Scene Analysis*. Wiley, New York; 1973.
16. Vannoorenberghe P, Colot O, Brucq DD: Colour image segmentation using Dempster-Shafer's theory. *Proc ICIP'99* 1999, 300-304.
17. Bradley R: A unified Bayesian decision theory. *Theory Decis* 2007, 63(3):233-263. doi:10.1007/s11238-007-9029-3
18. Shao-Long D, Jian-Ming W, Tao X, Hai-Tao L: Constraint-based fuzzy optimization data fusion for sensor network localization. *Second International Conference on Semantics, Knowledge and Grid, SKG '06* 2006, 59.
19. Dubois D, Prade H: Possibility theory and its applications: a retrospective and prospective view. *The 12th IEEE International Conference on Fuzzy Systems, FUZZ '03* 2003, 1:5-11.
20. Denoeux T: A *k*-nearest neighbour classification rule based on Dempster-Shafer theory. *IEEE Trans Syst Man Cybern* 1995, 25(5):804-813. doi:10.1109/21.376493
21. Ben Chaabane S, Sayadi M, Fnaiech F, Brassart E: Estimation of the mass function in the Dempster-Shafer's evidence theory using automatic thresholding for color image segmentation. *International Conference on Signals, Circuits and Systems, SCS'08, Tunisia* 2008, 1-5.
22. Zhu YM, Dupuis O, Rombaut M: Automatic determination of mass functions in Dempster-Shafer theory using fuzzy C-means and spatial neighborhood information for image segmentation. *Opt Eng* 2002, 41(4):760-770. doi:10.1117/1.1457458
23. Zimmerman HJ, Zysno P: Quantifying vagueness in decision models. *Eur J Oper Res* 1985, 22:148-158. doi:10.1016/0377-2217(85)90223-1
24. Ben Chaabane S, Sayadi M, Fnaiech F, Brassart E: Colour image segmentation using homogeneity approach and data fusion techniques. *EURASIP J Adv Signal Process* 2010, 2010:1-11. Article ID 367297
25. Ben Chaabane S, Sayadi M, Fnaiech F, Brassart E, Betin F: A new method for the mass functions estimation in the Dempster-Shafer's evidence theory: application to color image segmentation. *Circ Syst Signal Process* 2011, 30:55-71. doi:10.1007/s00034-010-9207-3
26. Raghu K, Keller JM: A possibilistic approach to clustering. *IEEE Trans Fuzzy Syst* 1993, 1(2):98-110. doi:10.1109/91.227387
27. Mignotte M: Segmentation by fusion of histogram-based *K*-means clusters in different color spaces. *IEEE Trans Image Process* 2008, 17(5):780-787.
28. Huang DY, Wang CH: Optimal multi-level thresholding using a two-stage Otsu optimization approach. *Pattern Recognit Lett* 2009, 30:275-284. doi:10.1016/j.patrec.2008.10.003
29. Dempster AP: Upper and lower probabilities induced by multivalued mapping. *Ann Math Stat* 1967, 38:325-339. doi:10.1214/aoms/1177698950
30. Shafer G: *A Mathematical Theory of Evidence*. Princeton University Press, Princeton, NJ; 1976.
31. Grau V, Mewes AUJ, Alcaniz M, Kikinis R, Warfield SK: Improved watershed transform for medical image segmentation using prior information. *IEEE Trans Med Imag* 2004, 23(4):447-458. doi:10.1109/TMI.2004.824224
32. Duda RO, Hart PE, Stork DG: *Pattern Classification*. Wiley-Interscience, New York; 2000.

## Copyright information

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.