
1 Introduction

Several pathologies affecting the retinal vascular structures due to diabetic retinopathy can be found in retinal images. Blood vessel segmentation from retinal images plays a crucial role in diagnosing complications of hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke [1]. An automatic and accurate blood vessel segmentation system could provide several useful features for the diagnosis of various retinal diseases and reduce doctors' workload. However, retinal images have low contrast, and large variability is introduced by the image acquisition process [2], which deteriorates automatic blood vessel segmentation results.

Many studies on retinal vessel segmentation have been reported, including rule-based methods [3], model-based methods [4-7], matched filtering [8-10], and supervised methods [2, 11-14].

In this paper, we propose an automatic unsupervised segmentation method that partitions retinal images into two classes: vessel and non-vessel. To improve the segmentation results, we construct a multi-dimensional feature vector from the green channel intensity and an intensity feature enhanced by morphological operations. Then, an unsupervised neural network, the self-organizing map (SOM), is exploited as the classifier for pixel clustering. Finally, we label each neuron in the output layer of the SOM as a vessel neuron or a non-vessel neuron with Otsu's method and obtain the final segmentation results.

The rest of this paper is organized as follows. Section 2 presents our proposed vessel segmentation method for retinal images. In Sect. 3, experimental results are presented, followed by the conclusion in Sect. 4.

2 Our Proposed Retinal Vessel Segmentation Method

In this section, a detailed description of our proposed segmentation method is presented. First, a multi-dimensional feature vector is extracted for each pixel. Then, an algorithm based on a neural network is proposed for automatic blood vessel segmentation.

2.1 Feature Extraction

Retinal images often show important lighting variations, poor contrast and noise [2]. In this paper, we expand each pixel of retinal image into a multi-dimensional feature vector, characterizing the image data beyond simple pixel intensities.

The Green Channel Intensity Feature.

In original RGB retinal images, the green channel shows the best vessel-background contrast, while the red and blue channels show low contrast and are noisy [2, 11]. So, we select the green channel from the RGB retinal image, and the green channel intensity of each pixel is taken as the intensity feature. Figure 1(a) is the original RGB retinal image from DRIVE database, and the green channel image is shown in Fig. 1(b).

Fig. 1. Illustration of the feature extraction process. (a) Original RGB retinal image. (b) The green channel of the original image. (c) Shade-corrected image. (d) Vessel-enhanced image. (e) The segmentation result with our proposed method. (f) The manual segmentation result by the first specialist (Color figure online).

Vessel Enhanced Intensity Feature.

Retinal images often contain background intensity variation because of non-uniform illumination, which deteriorates the segmentation results. In the present work, the shade-correction method of [15] is used to remove the background lighting variations. The shade-corrected image of Fig. 1(b) is presented in Fig. 1(c).

After background homogenization, the contrast between the blood vessels and the background is still generally poor. Vessel enhancement is performed by estimating the complementary image of the homogenized image and then applying the morphological top-hat transformation with a disc of eight pixels in radius. Figure 1(d) is the vessel-enhanced image of Fig. 1(c).

In order to generate features that are robust to lighting variation, we combine the enhanced intensity feature with the green channel intensity to form the pixel feature vector.
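The feature extraction pipeline above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the shade-correction kernel size is a placeholder (the paper defers to [15]), and only the disc radius of eight pixels for the top-hat is taken from the text.

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Boolean disc-shaped structuring element."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2

def extract_features(rgb):
    """Build the 2-D feature vector (green intensity, enhanced intensity)
    for every pixel of an H x W x 3 RGB retinal image with values in [0, 1]."""
    green = rgb[..., 1]                       # best vessel/background contrast
    # Shade correction: subtract a smooth background estimate
    # (illustrative stand-in for the method of [15]; size=25 is a guess).
    background = ndimage.median_filter(green, size=25)
    shade_corrected = green - background
    # Vessel enhancement: complement the homogenized image, then apply a
    # morphological white top-hat with a disc of eight pixels in radius.
    complement = shade_corrected.max() - shade_corrected
    enhanced = ndimage.white_tophat(complement, footprint=disk(8))
    return np.stack([green, enhanced], axis=-1)   # H x W x 2 feature map

rng = np.random.default_rng(0)
features = extract_features(rng.random((64, 64, 3)))
print(features.shape)  # (64, 64, 2)
```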

2.2 Segmentation System

Self-organizing Map.

At present, neural-network-based methods are often used in retinal image segmentation [11]. As an unsupervised clustering method, Kohonen's self-organizing map (SOM) [16] is a two-layer feedforward competitive learning neural network that can discover the topological structure hidden in the data and display it in a one- or two-dimensional space. Therefore, we exploit the SOM for blood vessel segmentation.

The SOM consists of an input layer and a single output layer of M neurons, which usually form a two-dimensional array. In the output layer, each neuron i has a d-dimensional weight vector \( w_{i} = [w_{i1}, \ldots, w_{id}] \). At each training step t, an input vector \( x_{p}(t) \) of a pixel p in the retinal image I is randomly chosen. The distance \( d_{x_{p},i}(t) \) between \( x_{p}(t) \) and each neuron i in the output layer is computed. The winning neuron c is the neuron whose weight vector is closest to \( x_{p} \), \( c = \arg\min_{i} d_{x_{p},i}(t), \; i \in \{1, \ldots, M\} \).

The set of neighboring neurons of the winning neuron c is denoted \( N_{c} \); its radius around the winning neuron decreases with time. \( N_{t}(c,i) \) is the neighborhood kernel function around the winning neuron c at time t. The neighborhood kernel function is a non-increasing function of time t and of the distance of neuron i from the winning neuron c in the 2-D output layer. The kernel function can be taken as a Gaussian function

$$ N_{t} (c,i) = \exp ( - \frac{{\left\| {r_{i} - r_{c} } \right\|^{2} }}{{2N_{c}^{2} (t)}}) $$

where \( r_{i} \) is the coordinate of neuron i on the output layer and \( N_{c}(t) \) is the kernel width. The weight-updating rule in the sequential SOM algorithm can be written as \( w_{i}(t + 1) = w_{i}(t) + \alpha(t)N_{t}(c,i)(x_{p}(t) - w_{i}(t)) \quad \forall i \in N_{c},\; p \in I \), where \( \alpha(t) \) is the learning rate of the algorithm. Generally, the learning rate \( \alpha(t) \) and the kernel width \( N_{c}(t) \) are monotonically decreasing functions of time [16].
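The sequential training loop above can be sketched in a few lines of NumPy. The learning-rate and kernel-width schedules below are illustrative choices (the paper only states that both decrease monotonically), so treat this as a sketch of [16] rather than the authors' parameterization.

```python
import numpy as np

def train_som(X, grid=(4, 4), n_steps=2000, seed=0):
    """Sequential SOM training: pick a random input, find the winning
    neuron, and pull neighboring weight vectors toward the input."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    M, d = rows * cols, X.shape[1]
    W = rng.random((M, d))                        # weight vectors w_i
    # 2-D coordinates r_i of the output-layer neurons
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    for t in range(n_steps):
        x = X[rng.integers(len(X))]               # random input x_p(t)
        c = np.argmin(np.linalg.norm(W - x, axis=1))      # winning neuron c
        alpha = 0.5 * (1.0 - t / n_steps)         # decreasing learning rate
        sigma = 2.0 * (1.0 - t / n_steps) + 0.1   # decreasing kernel width N_c(t)
        # Gaussian neighborhood kernel N_t(c, i)
        h = np.exp(-np.sum((coords - coords[c])**2, axis=1) / (2 * sigma**2))
        W += alpha * h[:, None] * (x - W)         # w_i(t+1) update rule
    return W

# Toy two-cluster data standing in for (green, enhanced) pixel features
X = np.vstack([np.random.default_rng(1).normal(0.2, 0.05, (200, 2)),
               np.random.default_rng(2).normal(0.8, 0.05, (200, 2))])
W = train_som(X)
print(W.shape)  # (16, 2)
```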

The SOM possesses some very useful properties. First, Kohonen [17] has argued that the density of the weight vectors assigned to an input region approximates the density of the inputs occupying that region. Second, the weight vectors tend to be ordered according to their mutual similarity.

In our work, we exploit the self-organizing map [16] to cluster the pixels of the retinal image. The vessels of a retinal image belong to its detail information. To preserve thin and small vessels in the segmentation result, we set the size of the output layer to 4 × 4. Thus, after SOM clustering there are multiple neurons (vessel neurons or non-vessel neurons) in the output layer.
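After training, every pixel is assigned to its winning neuron, which partitions the image into at most 4 × 4 = 16 clusters. A minimal sketch of this assignment step, assuming `W` is the trained M x d weight matrix:

```python
import numpy as np

def assign_pixels(features, W):
    """Map every pixel feature vector to the index of its winning SOM
    neuron (the neuron with the closest weight vector)."""
    flat = features.reshape(-1, features.shape[-1])           # (H*W, d)
    dists = np.linalg.norm(flat[:, None, :] - W[None], axis=2)
    return dists.argmin(axis=1).reshape(features.shape[:2])   # (H, W)

rng = np.random.default_rng(0)
labels = assign_pixels(rng.random((8, 8, 2)), rng.random((16, 2)))
print(labels.shape)  # (8, 8)
```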

Labeling the Output Neurons’ Class.

After clustering with the SOM algorithm, the output layer contains multiple neurons, including vessel neurons and non-vessel neurons. We use Otsu's method to estimate each neuron's class.

Otsu's method is used to automatically perform clustering-based image thresholding [18]. The algorithm assumes that the image contains two classes of pixels following a bi-modal histogram (foreground pixels and background pixels), and then calculates the optimum threshold separating the two classes so that their combined spread (intra-class variance) is minimal [19].
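The sketch below applies Otsu's idea to the neurons rather than to raw pixels: each output neuron is summarized by one scalar (here a hypothetical mean-intensity weight component), and the Otsu threshold splits the neurons into dark (vessel) and bright (non-vessel) groups. Minimizing intra-class variance is equivalent to maximizing between-class variance, which is what the loop computes.

```python
import numpy as np

def otsu_threshold(values, n_bins=64):
    """Otsu's method on a 1-D sample: return the threshold that
    maximizes the between-class variance w0*w1*(mu0 - mu1)^2."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = p[:k].sum(), p[k:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0  # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1)**2      # between-class variance
        if between > best_var:
            best_var, best_t = between, centers[k - 1]
    return best_t

# Hypothetical per-neuron mean intensities: two clear modes
neuron_intensity = np.array([0.1, 0.15, 0.12, 0.8, 0.85, 0.78, 0.2, 0.9])
t = otsu_threshold(neuron_intensity)
is_vessel = neuron_intensity <= t   # dark neurons taken as vessel class
print(is_vessel.sum())  # 4
```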

Postprocessing.

Finally, visual inspection reveals small isolated regions misclassified as blood vessels. If a connected vessel region contains no more than 30 pixels, it is reclassified as non-vessel. The segmentation result of our proposed method is shown in Fig. 1(e).

3 Experimental Results

3.1 Database and Similarity Indices

The DRIVE database [13] is used in our experiments. This dataset is a public retinal image database, and is widely used by other researchers to test their blood vessel segmentation methods. Moreover, the DRIVE database provides two sets of manual segmentations made by two different observers for performance validation. In our experiments, performance is computed with the segmentation of the first observer as ground truth.

To quantify the overlap between the segmentation results and the ground truth for vessel and non-vessel pixels, accuracy (Acc) is adopted in our experiments: Acc = (TP + TN) / (TP + TN + FP + FN), where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively.
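The Acc measure can be computed directly from the binary maps; since every FOV pixel is either a TP, TN, FP or FN, Acc reduces to the fraction of FOV pixels on which the prediction and the ground truth agree. The arrays below are hypothetical examples.

```python
import numpy as np

def acc(pred, gt, fov):
    """Acc = (TP + TN) / (TP + TN + FP + FN), over FOV pixels only."""
    p, g = pred[fov], gt[fov]
    return float(np.mean(p == g))

pred = np.array([[1, 0], [1, 1]], bool)   # predicted vessel map
gt   = np.array([[1, 0], [0, 1]], bool)   # manual segmentation
fov  = np.ones((2, 2), bool)              # field-of-view mask
print(acc(pred, gt, fov))  # 0.75
```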

For visual inspection, Fig. 2 depicts the blood vessel segmentation results on different retinal images from the DRIVE database. Figure 2(a), (d) and (g) are original retinal images with different illumination conditions, and their segmentation results using our proposed method are shown in Fig. 2(b), (e) and (h), respectively. The manual segmentation results by the first specialist are presented in Fig. 2(c), (f) and (i) for visual comparison. It is evident that our method is robust to the low contrast and large variability of the retinal images and produces accurate segmentation results.

Fig. 2. Examples of application of our segmentation method on three images with different illumination conditions. (a), (d), (g) Original RGB retinal images. (b), (e), (h) Segmentation results with our method. (c), (f), (i) The manual segmentation results by the first specialist (Color figure online).

In addition, we give a quantitative validation of our method on the DRIVE database with the available gold-standard images. Since a mask of the dark background outside the field of view (FOV) is provided for each image, accuracy (Acc) values are computed considering FOV pixels only. The results are listed in Table 1, and the last row of the table shows the average Acc value over the 20 images in the database.

Table 1. Performance results on DRIVE database images, according to Acc value.

3.2 Comparing the Performance of Our Algorithm with Other Methods

In order to compare our approach with other retinal vessel segmentation algorithms, the average Acc value is used as the measure of method performance. We compare our method with the following published methods: Martinez-Perez et al. [3], Jiang and Mojon [4], Chaudhuri et al. [8], Cinsdikici and Aydin [10], and Niemeijer et al. [12]. The comparison results are summarized in Table 2, which indicates that our proposed method outperforms most of the other methods.

Table 2. Comparing the segmentation results of different algorithms with our method on DRIVE database in terms of average Acc value.

4 Conclusions

This study proposes a retinal vessel segmentation method based on a neural network algorithm. To overcome the low contrast and large variability of retinal images, we construct the feature vector from the green channel intensity and the vessel-enhanced intensity feature. Then, we cluster the pixels of the retinal image with the SOM algorithm. Finally, we label each neuron in the output layer of the SOM as a vessel neuron or a non-vessel neuron with Otsu's method and obtain the final segmentation results.

Our method is validated on the DRIVE database with the available gold-standard images. From the visual inspection and quantitative validation in our experiments, it is evident that our method is robust to the low contrast and large variability of the retinal images and produces accurate segmentation results. In addition, we compare our method with state-of-the-art methods, and the experimental results indicate that our method outperforms most of them.