Journal of Digital Imaging, Volume 31, Issue 2, pp 224–234

Statistical Geometrical Features for Microaneurysm Detection

Abstract

Automated microaneurysm (MA) detection remains an open challenge because of the small size of MAs and their similarity to blood vessels. In this paper, we present a simple, efficient, real-time method for segmenting and detecting MAs in color fundus images (CFI). To do this, we use a novel set of features, based on statistics of the geometrical properties of connected regions, that can easily discriminate lesion from non-lesion pixels. For large-scale evaluation, the proposed method is validated on the DIARETDB1, ROC, STARE, and MESSIDOR datasets. It proves robust with respect to different image characteristics and camera settings. The best performance was achieved in per-image evaluation on the DIARETDB1 dataset, with a sensitivity of 88.09% at 92.65% specificity, which is quite encouraging for clinical use.

Keywords

Diabetic retinopathy · Mass screening · Red lesion · Microaneurysms · Digital fundus images · Object rule-based classification

Introduction

Motivation

Microaneurysms (MAs) are the first clinically visible signs of diabetic retinopathy (DR), and the MA count indicates the progression of DR. If the disease is recognized and treated at an early stage, treatment is quite effective. MA detection is therefore crucial for diagnosing and monitoring DR, and early detection facilitates timely treatment and reduces further complications of the retina. With diabetes incidence increasing worldwide, it is expected that by 2030 more than 500 million patients will need a yearly retinal examination [9]. Considering these statistics and the limited human expertise available, we need automated systems for detecting DR and grading its severity.

Background

Diabetes affects the body from head to toe, including the eyes. The eye complication caused by diabetes is called DR; it affects the retina and may cause blindness if left untreated. People with long-term diabetes typically have some degree of retinopathy. There are two types of DR: non-proliferative and proliferative. Non-proliferative DR (NPDR) is the early stage, while proliferative DR (PDR) is an advanced stage of the disease in which the retina starts growing new blood vessels (neovascularization [10]). NPDR presents with dark/red lesions (MAs and hemorrhages (HM)), and PDR presents with bright lesions (exudates and cotton wool spots).

DR is the major cause of complete or partial vision loss in patients suffering from long-term diabetes. It is an asymptomatic disease, i.e., the patient is unaware of its presence, and it remains undiagnosed until it progresses to an advanced stage, at which point treatment is complicated and not very effective. Diabetic patients are therefore advised to undergo annual screening [15]. But screening requires several examinations and is costly. Earlier research in this field showed that analysis of CFI is the simplest way to detect DR: apart from being cheap, CFIs can be acquired and read easily. CFI is therefore preferred over fluorescein angiography (FA) for large-scale screening.

According to the medical definition, an MA is a tiny swelling in the wall of a blood vessel (BV). It appears in the retinal capillaries as a small, round, red spot and is commonly found in DR, retinal vein occlusion, or absolute glaucoma [18]. MAs are usually 10 to 125 microns in size and are vision threatening if they occur in the macular region of the retina.

Summary of Contribution

The objective of our research is to find MAs in CFIs and grade these fundus images for disease severity (Table 1). The identified images can then be referred to human experts for review, reducing their burden and examination time and avoiding further complications through timely treatment. To achieve this, two novel contributions are proposed in this paper. First, statistical geometrical features for discriminating MAs in CFI without segmenting BVs are proposed. Second, to improve the classification of MA and non-MA objects, an object rule-based classifier is proposed. A support vector machine was also used for comparison, and better results were obtained with the proposed classifier. The proposed system attempts to provide a simple, efficient, real-time automated system for clinical use. The remainder of the paper is organized as follows: “Related Work” discusses state-of-the-art methods for MA detection in CFI and FA. In “Proposed Method,” the proposed method based on statistics of geometrical properties for MA detection is presented. Experimental results are discussed in “Experimental Results and Analysis.” Finally, conclusions are drawn in “Conclusion.”
Table 1

DR severity grading (Dupas et al. [7])

Grade | Severity       | Criterion
L0    | No DR (normal) | (MA = 0) AND (HM = 0)
L1    | Mild DR        | (0 < MA <= 5) AND (HM = 0)
L2    | Moderate DR    | (5 < MA < 15) OR (0 < HM < 5)
L3    | Severe DR      | (MA >= 15) OR (HM >= 5)
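
The grading rules of Table 1 translate directly into code. Below is a minimal sketch in Python (the paper's implementation language is MATLAB; Python is used here only for illustration); letting counts not covered by an explicit row fall through to L2 is our assumption.

```python
def grade_dr(ma_count: int, hm_count: int) -> str:
    """Grade DR severity from MA and HM counts per Table 1 (Dupas et al. [7])."""
    if ma_count == 0 and hm_count == 0:
        return "L0: No DR (normal)"
    if 0 < ma_count <= 5 and hm_count == 0:
        return "L1: Mild DR"
    if ma_count >= 15 or hm_count >= 5:
        return "L3: Severe DR"
    # remaining cases: (5 < MA < 15) OR (0 < HM < 5)
    return "L2: Moderate DR"
```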

Related Work

The very first attempts to detect MAs were made in fluorescein angiograms (FA) [3, 14]. Because MAs have a fairly uniform shape and size, these attempts were based on mathematical morphology. Spencer et al. [26] used shade correction and a matched filtering technique for MA detection; the goal was to distinguish MAs from elongated structures. Even though MAs appear more contrasted against the background in FA, the intravenous fluorescent dye used in FA has associated problems such as dark urination, pupil dilation, and nausea, which persist for several days after the examination. These problems prohibit FA from being used for mass screening.

From here on, all algorithms discussed in this paper are for CFI only. Walter et al. [27] used a diameter criterion for detecting MA candidates and a supervised density-based classifier for MA classification. Fleming et al. [8] showed that contrast normalization can be used to differentiate MAs from other artifacts; after comparing several normalization methods, the watershed transform achieved the best performance. Despite that, the system is complex, as its outcome depends on training-set cross-validation. Niemeijer et al. [20] used a pixel classification technique for MA candidate extraction, enhanced the feature set of [26], and used a k-NN classifier for MA recognition. Kande et al. [12] also proposed red lesion detection based on mathematical morphology and pixel classification: red and green channel intensity information is taken into account, and thresholds based on local relative entropy discriminate red lesions from the matched-filter-response background image.

Apart from the abovementioned MA detection techniques, several other algorithms have been proposed. Zhang et al. [29] used a multiscale correlation filter (MSCF) and thresholding with a two-level architecture: a coarse level, in which MA candidates are detected, and a fine level, in which true MAs are classified using features extracted at the coarse level. Five different Gaussian kernel scales were applied to the CFI, and for each feature a corresponding threshold was applied to decide MA candidates and then true MAs. But setting thresholds for all features was a crucial task requiring expert knowledge, so the choice of a useful feature set for MA candidate classification needs to be considered.

Quellec et al. [24] presented wavelet template matching for MA detection; non-uniform illumination and noise problems were effectively tackled by this method. Su et al. [28] proposed singular spectrum analysis (SSA) for locating MAs close to BVs: MA candidates are extracted, cross-section profiles in 12 directions are considered for each candidate, and SSA is used to identify true MAs. Zhou et al. [30] presented an unsupervised classification method for MA detection based on sparse principal component analysis, which does not require a non-MA training set; a single T² statistic was used to discriminate true MAs from non-MA candidates automatically.

Although all these reported MA extraction techniques have their advantages, they segment BVs prior to MA candidate detection. The problems with such strategies are, first, that the majority of the false positives (FP) in vessel segmentation are actually true lesions, and once they are removed with the BVs they cannot be retrieved; and second, that they have difficulty extracting MAs located close to BVs and discriminating MAs from vessel crossings and elongated HMs.

The main contribution of this paper is a new method, consisting of a set of statistical and geometrical features for MA detection, that does not need vessel segmentation. The proposed MA segmentation strategy is a direct translation of the medical definition of an MA. Because MAs have a fairly uniform shape and clear edges, they appear as holes in the edge-detected image. These holes are filled by morphological reconstruction, and the result is subtracted from the original edge-detected image. Statistical and geometrical features [5] are then extracted for true MA classification.

Proposed Method

The proposed method involves the following steps: preprocessing, MA candidate extraction, feature extraction, and object rule-based classification for true MA detection (Fig. 1).
Fig. 1

Flow chart of the proposed automated MA detection system

Preprocessing

A fundus image is a photograph of the inner eye. According to the radiation transport model [6, 22], when light enters the inner eye it is reflected, absorbed, and transmitted; the amounts of reflection, absorption, and transmission depend on melanin and hemoglobin concentration. Since absorption of blue light in the eye is high, its contribution to the fundus image color spectrum is very small. The absorption coefficient of hemoglobin is highest in the green part of the spectrum, so hemoglobin-bearing features of the eye absorb green light more strongly than their surroundings. Red light is least absorbed by inner-eye pigments, which makes fundus images appear reddish. Consequently, in RGB fundus images the red channel is saturated and low-contrast, the blue channel has a poor dynamic range and is noisy, whereas the green channel has high contrast for hemoglobin features such as BVs and red lesions (Fig. 2). For this reason the green channel of the RGB fundus image is used for further processing, as MAs are better visualized in the green band (Fig. 3b). For shade correction, the image is transformed by adaptive histogram equalization: the histogram of the green channel is equalized over small regions called tiles, with the contrast factor set to 0.05 to prevent over-saturation of the image, especially in homogeneous areas [16]. As seen in Fig. 3c, MAs are more clearly visible in the shade-corrected image than in the original CFI (Fig. 3a).
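
The preprocessing step can be sketched as follows. The tile-based, contrast-limited equalization described above corresponds to CLAHE; `skimage.exposure.equalize_adapthist` with `clip_limit=0.05` is one plausible implementation, though the paper does not name its library.

```python
import numpy as np
from skimage import exposure, io

def preprocess(path: str) -> np.ndarray:
    """Green-channel extraction plus tile-based adaptive histogram
    equalization (CLAHE) for shade correction."""
    rgb = io.imread(path)
    green = rgb[:, :, 1]          # hemoglobin features contrast best here
    # clip_limit mirrors the paper's contrast factor of 0.05, which
    # limits over-saturation in homogeneous areas
    return exposure.equalize_adapthist(green, clip_limit=0.05)
```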

MA Candidate Extraction

The proposed MA segmentation algorithm is summarized in Algorithm 1.
MAs exhibit a Gaussian shape, as seen in Fig. 2b. A Gaussian operator is thus applied to the shade-corrected image for smoothing:
$$ g(x, y,\sigma)=\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right) $$
(1)
Edge detection is then performed on the smoothed, shade-corrected image using the Sobel operator. Choosing the edge-detection thresholds was one of the most critical tasks: we found that threshold parameters which ignore image characteristics and preprocessing do not yield satisfactory results, so the thresholds were deliberately set to ensure that true edges are segmented. Two thresholds, T_1 and T_2, were set, and their values were varied as follows in order to assess the performance of the proposed method:
$$T_{1}\in\lbrace 0.06, 0.07, 0.075, 0.08, 0.09 \rbrace $$
$$T_{2}\in\lbrace 0.15, 0.16, 0.175, 0.19, 0.20 \rbrace $$
A slight change in threshold caused over-segmentation or under-segmentation, and many spurious objects similar to MAs were also segmented. Experimentation showed that T_1 = 0.06 and T_2 = 0.16 gave a good balance between detected MAs and spurious objects. Segments with a small number of pixels or segments that were very thin were removed.
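
A sketch of this candidate edge detection follows. The paper does not state how T_1 and T_2 are combined or which Gaussian σ is used; treating the pair as hysteresis thresholds and taking σ = 1.0 are assumptions made here.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology

def detect_edges(shade_corrected: np.ndarray,
                 t1: float = 0.06, t2: float = 0.16,
                 sigma: float = 1.0) -> np.ndarray:
    """Gaussian smoothing (Eq. 1) followed by Sobel edge detection
    with two thresholds; tiny fragments are discarded."""
    smoothed = ndi.gaussian_filter(shade_corrected, sigma=sigma)
    grad = filters.sobel(smoothed)                       # gradient magnitude
    edges = filters.apply_hysteresis_threshold(grad, t1, t2)
    # the paper removes segments that are very small or very thin
    return morphology.remove_small_objects(edges, min_size=4)
```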
Fig. 2

Cross-section of the intensity profile. a Green channel view of an MA; pixels belonging to the MA are darker than the background. b Gray intensity profile

The difference between the edge-detected and the filled-in image is then considered, and the resulting binarized image is obtained with the threshold η(th) [21]:
$$ \eta(th)= \arg\max_{1\leq th <L}\left[{\sigma_{B}^{2}}/{\sigma_{T}^{2}}\right] $$
(2)
where \({\sigma _{B}^{2}}\) and \({\sigma _{T}^{2}}\) are the between-class variance and the total variance, respectively. The method is based on discriminant analysis, which partitions the image of L gray levels into two classes C_0 = {0, 1, 2, ..., t} and C_1 = {t + 1, t + 2, ..., L − 1}, corresponding to object and background. The class probabilities are \( w_{0}=\sum \limits _{i=0}^{t}p_{i} \) and \( w_{1}=\sum \limits _{i=t+1}^{L-1}p_{i} \), where \( p_{i} = n_{i}/n \) is the probability of occurrence of gray level i, and the class means are \( \mu _{0}(t)=\sum \limits _{i=0}^{t}ip_{i}/w_{0}(t) \) and \( \mu _{1}(t)=\sum \limits _{i=t+1}^{L-1}ip_{i}/w_{1}(t) \). The optimal η(th) is computed by maximizing \({\sigma _{B}^{2}} = w_{0}(\mu _{0}-\mu _{T})^{2} + w_{1}(\mu _{1}-\mu _{T})^{2}\), where \(\mu _{T}\) is the total mean of the whole image. In our study, \({\sigma _{B}^{2}}\) was identified as 0.498, which determines η(th).
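
For reference, Eq. 2 reduces to the standard Otsu computation: since σ_T² is constant for a given image, maximizing σ_B²/σ_T² is equivalent to maximizing σ_B². A minimal sketch for an 8-bit image:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray, L: int = 256) -> int:
    """Return th maximizing the between-class variance sigma_B^2 (Eq. 2)."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()                  # p_i = n_i / n
    w0 = np.cumsum(p)                      # class probability w_0(t)
    mu = np.cumsum(p * np.arange(L))       # cumulative first moment
    mu_T = mu[-1]                          # total mean of the image
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b2 = np.zeros(L)
    # sigma_B^2 = (mu_T*w0 - mu)^2 / (w0*w1), an algebraic rearrangement
    # of w0*(mu0 - mu_T)^2 + w1*(mu1 - mu_T)^2
    sigma_b2[valid] = (mu_T * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return int(np.argmax(sigma_b2))
```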
Since MAs have clear edges, they appear as holes in the edge-detected image, and these holes are filled by morphological reconstruction (Fig. 3d):
$$ z_{k}\gets (z_{k-1} \oplus s)\cap f $$
(3)
where s is the structuring element, a 3 × 3 matrix of ones, and the dilation is iterated until z_k = z_{k−1}. Each candidate segmented from the filled image is 8-connected and stored in the database for further feature extraction. Geometrical attributes are calculated for each connected object, and its non-compactness (irregularity) is considered statistically when segmenting each binary object.
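
A direct sketch of this reconstruction, together with the candidate extraction by subtraction, might look as follows (equivalent in effect to `scipy.ndimage.binary_fill_holes`):

```python
import numpy as np
from scipy import ndimage as ndi

def extract_candidates(edges: np.ndarray) -> np.ndarray:
    """Fill holes in the edge map by reconstructing the background
    from the image border, z_k <- (z_{k-1} (+) s) AND f (Eq. 3),
    then subtract the edges to keep only the filled interiors."""
    f = ~edges                             # mask: non-edge pixels
    s = np.ones((3, 3), dtype=bool)        # 3 x 3 structuring element
    z = np.zeros_like(edges)
    # seed the reconstruction with the background pixels on the border
    z[0, :], z[-1, :], z[:, 0], z[:, -1] = f[0, :], f[-1, :], f[:, 0], f[:, -1]
    while True:                            # iterate until z_k = z_{k-1}
        z_next = ndi.binary_dilation(z, structure=s) & f
        if np.array_equal(z_next, z):
            break
        z = z_next
    filled = ~z                            # edges together with their holes
    return filled & ~edges                 # candidate MAs: hole interiors only
```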
Fig. 3

MA detection stages of the proposed method. a Input image, b green channel image, c preprocessed image, d edge-detected image, e candidate MAs, and f true MAs

Feature Extraction

After image binarization, the remaining objects are candidate MAs, thin vessel fragments, and some noise or other artifacts. MAs and noise have almost the same area, but they differ in shape: MAs are circular, whereas noise has an irregular, elongated shape. A circularity measure can therefore discriminate between these two classes. MAs are compact and circular, whereas BV fragments are non-compact and elongated. These facts minimize the classification errors arising from confusion between MAs and other artifacts. The following five facts were used to improve classification accuracy.
  • F1: MAs are circular, whereas BV fragments are oblong.

  • F2: MAs are compact, whereas BV fragments are non-compact.

  • F3: MAs are typically < 125 μm, whereas BVs have a larger area.

  • F4: MAs have a unity (bounding box) aspect ratio, whereas BVs have a non-unity aspect ratio.

  • F5: MAs, being circular, have low eccentricity.

During our research, we asked retina experts how they recognize MAs in a fundus image, so that our feature extraction would mimic them. We found that retina experts give great importance to intensity, size, shape, and color features. Taking all these facts into consideration, we propose the following set of relevant and significant features for classifying MA candidates. For each feature, the general motivation and its consequences for classification are explained.
  A) Area (A_k): let Ω be the number of pixels in the region. The area is defined as the number of pixels in the object, \(A_{k}=\sum \limits _{i=1}^{\Omega } O_{i}\). The area feature eliminates false positives from the MA candidates. True MAs are 10 to 125 μm across; objects with an area below 4 pixels or above 25 pixels are excluded.

  B) Eccentricity (e): the ratio of the longest chord l_c to the longest perpendicular chord l_p, \(e = l_{c}/l_{p}\) with \(l_{c}=\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}\). Since MAs are circular, their eccentricity should be zero or close to zero.

  C) Perimeter (P): the total number of object pixels having one or more background neighbors, \(P_{8} = \{(r,c) \in R \mid N_{4}(r,c) - R \neq \emptyset\}\), where P_8 is the 8-connected perimeter.

  D) Compactness (C): a measure of roundness, \(C = 4\pi A_{k}/P^{2}\), where P is the perimeter and A_k the area of the candidate region. True MAs are compact; for a circular object C ≈ 1. During our research, we analyzed several MAs of various sizes and found that most have compactness in the range 0.85 to 1.

  E) Irregularity (R): a measure of non-compactness (irregularity) of the candidate region, likewise \(R = 4\pi A_{k}/P^{2}\). The higher this ratio, the rounder the object. By this criterion, irregularly shaped and elongated objects, which show smaller values (closer to 0), are removed.

  F) Object length (l): a scalar specifying the length (in pixels) of the major axis of the ellipse having the same normalized second central moments as the region being analyzed.

  G) Object width (w): the minor-axis length (in pixels) of the ellipse with the same normalized second central moments as the region being analyzed.

  H) Aspect ratio (s): a measure of the relationship between the bounding-box dimensions of an object, s = l/w. For true MAs both dimensions should be nearly equal.

  I) Object intensity (I): the summed intensity of the pixels within the object region, \( I = \sum \limits _{j\in {\Omega }} G^{o}_{j} \), where G^o is the green component of the RGB image.

  J) Standard deviation: \(\sigma = \sqrt {\sum \limits _{j=0}^{L-1} (r_{j}-m)^{2}p(r_{j})}\), where m is the mean gray value, representing average intensity.

This new feature set, based on the statistical geometrical properties of the connected components of the binary image, was chosen with the objective of improving classification accuracy. Besides these ten features, we tested many others, such as homogeneity, skewness, and solidity, but they did not improve performance significantly. Moreover, more features cause more confusion between similar classes and invite the curse of dimensionality, so we aimed for a compact set. Even though the proposed method uses few features, they contain enough information to detect MAs effectively. A sketch of the feature computation follows.
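
The sketch below computes features A–J for every 8-connected candidate object, using `skimage.measure.regionprops` as one possible implementation of the definitions above (the paper does not name its implementation):

```python
import numpy as np
from skimage import measure

def extract_features(candidates: np.ndarray, green: np.ndarray) -> list:
    """Compute features A-J for each 8-connected candidate object."""
    labels = measure.label(candidates, connectivity=2)   # 8-connectivity
    feats = []
    for r in measure.regionprops(labels, intensity_image=green):
        pixels = r.intensity_image[r.image]              # green values inside object
        perim = r.perimeter or 1e-6                      # guard divide-by-zero
        width = r.minor_axis_length or 1e-6
        feats.append({
            "area": r.area,                                  # A
            "eccentricity": r.eccentricity,                  # B: ~0 for a circle
            "perimeter": r.perimeter,                        # C
            "compactness": 4 * np.pi * r.area / perim ** 2,  # D/E: ~1 for a circle
            "length": r.major_axis_length,                   # F
            "width": r.minor_axis_length,                    # G
            "aspect_ratio": r.major_axis_length / width,     # H: ~1 for a circle
            "intensity": pixels.sum(),                       # I
            "std": pixels.std(),                             # J
        })
    return feats
```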

Object Rule-Based Classification

The proposed object rule-based (ORB) classification system helps make the referral decision. We trained on the data and devised rules from the training data; the classifier uses these learned rules to classify unseen data.

Notations used

Notations used in the ORB classifier are as follows:

  R   - a rule used in the classifier
  R_b - the set of rules (rule base)
  D   - training data
  T   - set of tuples
  V   - feature vector space
  C   - number of class labels in the classifier
  A_k - the k attributes used to describe sample data
  p   - a literal, i.e., an attribute-value pair

Every tuple t in the set of tuples has the form (A_1, A_2, …, A_k). A literal p is represented as an attribute-value pair (A_i, v), where A_i is an attribute and v is its associated value. A tuple t satisfies p = (A_i, v) iff t_i = v, where t_i is the value of the i-th attribute of t. A rule R then takes the form:
$$ R : \{ p_{1} \wedge p_{2} \wedge{\ldots} p_{n}\} \longrightarrow C $$
(4)
where p_1, p_2, … are literals and C is a class. These rules are used to extract true MAs from the candidate MAs: objects satisfying the rules are considered true MAs, otherwise they are considered FPs.
In our research, we chose a set of rules R for D. If no rule applies to an unseen case, it takes the default class, i.e., normal. One of the classification rules of the proposed system is:
$${(I=high)\wedge(s=unity) \wedge (S_{k} \leq \lambda)} \longrightarrow MA $$
where λ is 4 to 15 pixels in our case. The collection of such rules, called the rule base R_b, is used in ORB classification and is defined as follows:
$$ R_{b}=\lbrace R_{1},R_{2},R_{3},\ldots,R_{7}\rbrace $$
(5)
  • R1) If the compactness of an object is close to 1, the object is likely to be an MA.

  • R2) If the eccentricity of an object is high, the object is likely to be a BV.

  • R3) If the intensity of an object is high and it is circular, the object is likely to be an MA.

  • R4) If the aspect ratio of an object is unity, the object is likely to be an MA.

  • R5) If an object is circular and its perimeter is small, the object is likely to be an MA.

  • R6) If an object's intensity is high and its aspect ratio is unity, the object is likely to be an MA.

  • R7) If an object's intensity is high, its aspect ratio is unity, and its area is within λ, the object is likely to be an MA.

The accuracy of MA detection increases as an object satisfies more rules, as in the sketch below.
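
The following is a minimal sketch of the rule base in code. The rule structure follows R1–R7 and Eq. 4, but the numeric cutoffs (0.85 for compactness, 0.8 for eccentricity, the aspect-ratio tolerance, and the intensity cutoff `i_high`) are illustrative assumptions; the paper quantifies only λ.

```python
def classify_candidate(f: dict, lam=(4, 15), i_high: float = 0.5):
    """Apply an R_b-style rule set to one feature dict (see
    extract_features above); returns the label and the number of
    satisfied rules, since confidence grows with that count."""
    satisfied = [
        f["compactness"] >= 0.85,              # R1: compact, near-circular
        f["eccentricity"] < 0.8,               # R2 (negated): not elongated
        f["intensity"] / f["area"] >= i_high,  # "high intensity" literal
        abs(f["aspect_ratio"] - 1.0) < 0.3,    # R4: near-unity aspect ratio
        lam[0] <= f["area"] <= lam[1],         # R7: area within lambda
    ]
    # Eq. 4: a conjunction of literals implies the class; unseen cases
    # satisfying no rule fall back to the default class (non-MA).
    label = "MA" if all(satisfied) else "non-MA"
    return label, sum(satisfied)
```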

Experimental Results and Analysis

Datasets used

The performance of our method was evaluated on four datasets: DIARETDB1 [13], ROC [19], STARE [11], and Messidor [17]. Since the images have different characteristics, they were resized with bilinear interpolation during preprocessing.

STructured Analysis of the Retina (STARE) [11]

The STARE database has ∼400 retinal images, captured with a Topcon TRV-50 fundus camera at a 35° FOV. Each image is 605 × 700 pixels with 24 bits per pixel. All images are hand labeled by two experts, and expert annotations of the manifestations (features) visible in each image are tabulated in text files. The database also includes BV segmentation work with 40 hand-labeled images, their results, and a demo. MAs are categorized as many, few, absent, or unknown.

Standard Diabetic Retinopathy Database DIARETDB1 [13]

The database consists of 89 CFIs, of which 5 are normal according to all the experts called for evaluation and 84 contain at least mild NPDR signs (MAs). A 50° FOV was used for capturing fundus images under varied imaging settings. Each image is 1500 × 1152 pixels with 24 bits per pixel. This dataset is called “calibration level 1 fundus images.” The ground-truth confidence levels (< 50%, 50%, 100%) represent the certainty that a marked finding is correct.

Methods to Evaluate Segmentation and Indexing Techniques (MESSIDOR) [17]

The Messidor database contains 1200 retinal images captured in three different ophthalmology departments. Of the 1200 images, 800 were acquired with pupil dilation and 400 without. It has been publicly available since 2008. Images were captured with a Topcon TRC NW6 at a 45° FOV and packed in three sets; each set has four zipped subsets of 100 TIFF images each, with an Excel file giving the medical diagnosis for each image. The reference standard provides two diagnoses: retinopathy grade and risk of macular edema.

Retinopathy Online Challenge (ROC) [19]

The retinopathy online challenge (ROC) database has 50 images each in its training and testing sets, selected from a large DR screening program spanning multiple sites, which used various cameras with varied fields of view and image resolutions (Table 2). Only the ROC training set has MA locations provided as ground truth. The ROC formerly ran a multi-year challenge for MA detection in which researchers could test their algorithms on the ROC test set, but the website is now inactive [1], so we were unable to test our algorithm on the ROC test set. We therefore randomly selected images from the ROC training set for testing and verifying our model.
Table 2

Characteristics of datasets used for evaluation

Database       | Image size                           | FOV | Format | Total imgs | Imgs with MA | Pupil dilated / non-dilated  | Experts in annotation | Reference standard
DIARETDB1 [13] | 1500 × 1152                          | 50° | png    | 89         | 84           | Non-dilated                  | Four                  | Per-lesion
ROC [19]       | Multiple, 768 × 576 to 1389 × 1383   | 45° | jpeg   | 100        | Not reported | Non-dilated                  | Three                 | Per-lesion
Messidor [17]  | 1440 × 960, 2240 × 1488, 2304 × 1536 | 45° | jpg    | 1200       | 226          | 800 dilated, 400 non-dilated | One                   | Per-image
STARE [11]     | 700 × 605                            | 35° | ppm    | 400        | 72           | Not reported                 | Not reported          | Per-image

Performance Evaluation

The proposed ORB classifier was trained on the 50 images of the DIARETDB1 [13] dataset for which ground truth is available; the remaining 39 images of the same dataset were used for testing. Both positive and negative samples were taken for training. A training set is an established set for which the feature values and the classification result are known. Each object is characterized by a vector V in an n-dimensional feature space (n = 10 here). The last step is to decide whether a candidate belongs to ω_1 (MA) or ω_2 (non-MA), given its representation in the training set:
$$ \mathrm{TRAIN}= \lbrace ({V}_{i}, {\omega_{k}^{i}}) \mid i= 1,\ldots,N;\ k=1,2 \rbrace $$
(6)
In the final classification, the rule base R_b is applied to detect true MAs in unseen data. Images from the DIARETDB1 [13], ROC [19], STARE [11], and Messidor [17] datasets were then used for testing.
This allowed us to evaluate our method on images with the different characteristics listed in Table 2. As the images come from different datasets, their sizes also differ; we resized all images to 576 × 750 for analyzing computational efficiency. We evaluated our method on lesion-level and image-level detection, the latter giving the more promising results. Table 3 compares the performance of the proposed MA method with existing methods. These methods were not all evaluated on the same dataset; commonly, algorithm performance is reported by receiver operating characteristic analysis on a per-lesion [2, 23] or per-image basis, in which the true positive rate is plotted against the false positive rate.
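
For clarity, per-image sensitivity and specificity reduce to counting images under the binary decision "at least one MA detected." A small helper, with boolean list inputs assumed:

```python
def per_image_metrics(has_ma_truth: list, has_ma_pred: list):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), where an
    image is positive when at least one MA is present / detected."""
    tp = sum(t and p for t, p in zip(has_ma_truth, has_ma_pred))
    tn = sum(not t and not p for t, p in zip(has_ma_truth, has_ma_pred))
    fp = sum(not t and p for t, p in zip(has_ma_truth, has_ma_pred))
    fn = sum(t and not p for t, p in zip(has_ma_truth, has_ma_pred))
    return tp / (tp + fn), tn / (tn + fp)
```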
Table 3

Performance of proposed system compared with existing methods

Author               | Dataset used                          | No. of images used | Sensitivity (%) | Specificity (%)
Adal et al. [2]      | DIARETDB1, ROC, UTHSC private dataset | 380                | 64.62           | 92.31
Purwita et al. [23]  | DIARETDB1                             | 89                 | NR              | NR
Zhang et al. [29]    | DIARETDB1, ROC                        | 11, 50             | 71.30           | NR
Sopharak et al. [25] | Non-dilated dataset                   | 80                 | 85.68           | 99.99
Bhalerao et al. [4]  | DIARETDB1                             | 89                 | 82.6            | 80.2
Proposed method      | DIARETDB1, ROC, MESSIDOR, STARE       | 89, 50, 1200, 400  | 88.09           | 92.65

NR not reported

Per Lesion Evaluation

Generally, quantification consists of counting specific types of objects or lesions, such as MAs; this lesion count predicts the severity of the disease. The number of MAs and the associated disease severity are tabulated in Table 1. Here each pixel is classified as MA or non-MA. The proposed method counts the number of MAs detected and grades disease severity per Table 1; the grade helps make the referral decision.

Of the datasets used, only DIARETDB1 has a per-lesion reference standard. The proposed method achieves a sensitivity of 84.15% at a specificity of 93.50% on the DIARETDB1 dataset; even where MA detection is inaccurate, the specificity remains quite high. Table 4 shows the per-lesion evaluation of the proposed method on DIARETDB1. One-sided confidence intervals were calculated for each lesion; Fig. 4 shows the confidence interval of lesions identified by the proposed method compared with the reference. The performance of the proposed system is also evaluated by plotting the true positive rate against the false positives per image, shown as a dotted curve in Fig. 5.
Fig. 4

a True MAs detected by the proposed method, highlighted to better visualize the confidence interval. b Ground truth image (image019) from DIARETDB1

Fig. 5

Performance Evaluation on DIARETDB1 dataset

Per Image Evaluation

At least one MA in a retinal fundus image is considered a sign of DR, while its absence indicates a healthy retina. An image is graded as no DR, mild, moderate, or severe DR per Table 1. The proposed method's image-level detection results are better than its lesion-level results, as images only need to be graded. Its ability to detect DR images and separate them from healthy ones, on the basis of the presence or absence of MAs, is quite satisfactory. Table 4 shows the per-image evaluation of the proposed method on the datasets used. When classifying DR images, the proposed method obtained a sensitivity of 88.09% at a specificity of 92.65%.
Table 4

Per-image and per-lesion evaluation results of the proposed method on the datasets used

Dataset used | No. images in dataset | No. images with MA | Per-lesion Sens (%) | Per-lesion Spec (%) | No. images correctly classified | Per-image Sens (%) | Per-image Spec (%)
DIARETDB1    | 89                    | 84                 | 84.15               | 93.50               | 74                              | 88.09              | 92.65
ROC          | 50 (a)                | 37                 | 82.06               | 91.93               | 43                              | 86.00              | 91.58
Messidor     | 1200                  | 226                | NA                  | NA                  | 1021                            | 85.08              | 89.56
STARE        | 387 (b)               | 72                 | NA                  | NA                  | 316                             | 81.65              | 89.80

(a) Training set only
(b) Known

Computational Complexity

Let \(I = [i_{xy}]_{X\times Y}\) be an image with intensity value \(i_{xy} \in G = \{0, 1, \ldots, L-1\}\) at pixel (x, y). The time and space complexity of processing I (segmentation, edge detection, feature extraction, etc.) increases with X, Y, or L. Let \(r_{1}, r_{2}, \ldots, r_{p}\) be the p detected binary objects, each \(r_{i}\) having \(n_{i}\) pixels.

The computational complexity of the proposed method is O(n + k²), where n = X × Y is the total number of pixels in image I and k is the total number of objects detected. As k² ≪ n, the computational complexity of the proposed method is normally O(n). Complexity can be reduced further by carefully eliminating redundant features or reducing the number of features; the main challenge lies in finding the feature subset that enhances classifier performance by removing redundant and unnecessary features. The efficacy of a feature selection technique is usually assessed by the performance of the model trained with the selected subset. The performance of the proposed ORB classifier trained with the selected 10-feature subset is shown by the area under the ROC curve (AUC) in Fig. 5.

We implemented the proposed algorithm in Matlab 2016a on a Core i7 at 3.40 GHz, without any parallel computing. The proposed algorithm takes approximately 0.6 s for the smaller STARE images after resizing and at most 3 s for the largest Messidor images without resizing; on average, it takes 0.82 s to process one image. The results of the proposed system on four publicly available datasets demonstrate that the best performance is achieved when images from the same dataset are used for training and testing.

We also compared the proposed ORB classifier with a support vector machine, a machine learning algorithm (Table 5). The SVM kernel is the Gaussian radial basis function \( K(x,x^{\prime })= \exp\left(-\frac{\|x-x^{\prime }\|^{2}}{2\sigma ^{2}}\right) \), where \(\|x-x^{\prime }\|^{2}\) is the squared Euclidean distance between two feature vectors.
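
A sketch of this baseline with scikit-learn, where `gamma` = 1/(2σ²); the paper does not report σ, so `gamma="scale"` and the placeholder arrays are assumptions:

```python
from sklearn.svm import SVC

# X_train, y_train, X_test are placeholders for the 10-dimensional
# feature vectors (Eq. 6) and their MA / non-MA labels.
svm = SVC(kernel="rbf", gamma="scale")   # RBF kernel: exp(-gamma * ||x - x'||^2)
svm.fit(X_train, y_train)
predictions = svm.predict(X_test)
```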
Table 5

Lesion-level performance of classifiers on the DIARETDB1 dataset

Classifier             | Sensitivity (%) | Specificity (%) | Accuracy (%)
Proposed ORBC          | 84.15           | 93.50           | 84.15
Support vector machine | 82.10           | 89.12           | 82.10

Conclusion

In this paper, we presented a novel method for MA detection in CFI, based on statistics of geometrical features and object rule-based classification, that eliminates the BV extraction step. We evaluated our method on four publicly available standard datasets: DIARETDB1, ROC, MESSIDOR, and STARE. The results demonstrate that the proposed automated MA detector is robust with respect to different imaging conditions, and it achieved competitive results in detecting MAs. Although these results are not optimal, they are encouraging and suggest that, with some improvements, HMs may be detected as well. Currently, we are working on a novel machine-learning, decision-based method for feature selection that discriminates MAs efficiently in real time.

References

  1. Abramoff MD: Retinopathy online challenge (ROC) [Online]. 2017
  2. Adal KM, Sidibé D, Ali S, Chaum E, Karnowski TP, Mériaudeau F: Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning. Comput Methods Prog Biomed 114(1):1–10, 2014
  3. Baudoin CE, Lay BJ, Klein JC: Automatic detection of microaneurysms in diabetic fluorescein angiography. Revue d'épidémiologie et de santé publique 32(3–4):254–261, 1983
  4. Bhalerao A, Patanaik A, Anand S, Saravanan P: Robust detection of microaneurysms for sight threatening retinopathy screening. In: Sixth Indian Conference on Computer Vision, Graphics & Image Processing (ICVGIP'08). IEEE, 2008, pp 520–527
  5. Chen YQ, Nixon MS, Thomas DW: Statistical geometrical features for texture classification. Pattern Recogn 28(4):537–552, 1995
  6. Delori FC, Pflibsen KP: Spectral reflectance of the human ocular fundus. Appl Opt 28(6):1061–1077, 1989
  7. Dupas B, Walter T, Erginay A, Ordonez R, Deb-Joardar N, Gain P, Klein J-C, Massin P: Evaluation of automated fundus photograph analysis algorithms for detecting microaneurysms, haemorrhages and exudates, and of a computer-assisted diagnostic system for grading diabetic retinopathy. Diabetes Metab 36(3):213–220, 2010
  8. Fleming AD, Philip S, Goatman KA, Olson JA, Sharp PF: Automated microaneurysm detection using local contrast normalization and local vessel detection. IEEE Trans Med Imaging 25(9):1223–1232, 2006
  9. Gan D: Diabetes atlas. International Diabetes Federation, 2003
  10. Hassan SSA, Bong DBL, Premsenthil M: Detection of neovascularization in diabetic retinopathy. J Digit Imaging 25(3):437–444, 2012
  11. Hoover A: STARE database. Available: http://cecas.clemson.edu/ahoover/stare/. 1975
  12. Kande GB, Satya Savithri T, Venkata Subbaiah P: Automatic detection of microaneurysms and hemorrhages in digital fundus images. J Digit Imaging 23(4):430–437, 2010
  13. Kauppi T, Kalesnykiene V, Kamarainen J-K, Lensu L, Sorri I, Raninen A, Voutilainen R, Uusitalo H, Kälviäinen H, Pietilä J: The DIARETDB1 diabetic retinopathy database and evaluation protocol. In: BMVC, 2007, pp 1–10
  14. Lay B, Baudoin C, Klein J-C: Automatic detection of microaneurysms in retinopathy fluoro-angiogram. In: 27th Annual Technical Symposium, International Society for Optics and Photonics, 1984, pp 165–173
  15. Lee SC, Wang Y, Lee ET: Computer algorithm for automated detection and quantification of microaneurysms and hemorrhages (hmas) in color retinal images. In: Medical Imaging '99, International Society for Optics and Photonics, 1999, pp 61–71
  16. Manjaramkar A, Kokare M: A rule based expert system for microaneurysm detection in digital fundus images. In: 2016 International Conference on Computational Techniques in Information and Communication Technologies (ICCTICT). IEEE, 2016, pp 137–140
  17. MESSIDOR TECHNO-VISION: Messidor: methods to evaluate segmentation and indexing techniques in the field of retinal ophthalmology. 2014. Available: http://messidor.crihan.fr/index-en.php. Accessed October 9, 2014
  18. Millodot M: Dictionary of optometry and visual science. Elsevier Health Sciences, 2014
  19. Niemeijer M, Van Ginneken B, Cree MJ, Mizutani A, Quellec G, Sánchez CI, Zhang B, Hornero R, Lamard M, Muramatsu C, et al.: Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs. IEEE Trans Med Imaging 29(1):185–195, 2010
  20. Niemeijer M, Ginneken BV, Staal J, Suttorp-Schulten MSA, Abràmoff MD: Automatic detection of red lesions in digital color fundus photographs. IEEE Trans Med Imaging 24(5):584–592, 2005
  21. Otsu N: A threshold selection method from gray-level histograms. Automatica 11(285–296):23–27, 1975
  22. Preece SJ, Claridge E: Monte Carlo modelling of the spectral reflectance of the human eye. Phys Med Biol 47(16):2863, 2002
  23. Purwita AA, Adityowibowo K, Dameitry A, Atman MWS: Automated microaneurysm detection using mathematical morphology. In: 2011 2nd International Conference on Instrumentation, Communications, Information Technology, and Biomedical Engineering (ICICI-BME). IEEE, 2011, pp 117–120
  24. Quellec G, Lamard M, Josselin PM, Cazuguel G, Cochener B, Roux C: Optimal wavelet transform for the detection of microaneurysms in retina photographs. IEEE Trans Med Imaging 27(9):1230–1241, 2008
  25. Sopharak A, Uyyanonvara B, Barman S: Simple hybrid method for fine microaneurysm detection from non-dilated diabetic retinopathy retinal images. Comput Med Imaging Graph 37(5):394–402, 2013
  26. Spencer T, Olson JA, McHardy KC, Sharp PF, Forrester JV: An image-processing strategy for the segmentation and quantification of microaneurysms in fluorescein angiograms of the ocular fundus. Comput Biomed Res 29(4):284–302, 1996
  27. Walter T, Massin P, Erginay A, Ordonez R, Jeulin C, Klein J-C: Automatic detection of microaneurysms in color fundus images. Med Image Anal 11(6):555–566, 2007
  28. Su W, Tang HL, Hu Y, Sanei S, Saleh GM, Peto T, et al.: Localising microaneurysms in fundus images through singular spectrum analysis. IEEE Trans Biomed Eng, 2016
  29. Zhang B, Wu X, You J, Li Q, Karray F: Detection of microaneurysms using multi-scale correlation coefficients. Pattern Recogn 43(6):2237–2248, 2010
  30. Zhou W, Wu C, Chen D, Yi Y, Du W: Automatic microaneurysm detection using the sparse principal component analysis-based unsupervised classification method. IEEE Access 5:2563–2572, 2017

Copyright information

© Society for Imaging Informatics in Medicine 2017

Authors and Affiliations

  1. Department of Information Technology, SGGS Institute of Engineering & Technology, Nanded, India
  2. Department of Electronics & Telecommunication, SGGS Institute of Engineering & Technology, Nanded, India
