A Novel Adaptive Deformable Model for Automated Optic Disc and Cup Segmentation to Aid Glaucoma Diagnosis
Abstract
This paper proposes a novel Adaptive Region-based Edge Smoothing Model (ARESM) for automatic boundary detection of the optic disc and cup to aid automatic glaucoma diagnosis. The novelty of our approach consists of two aspects: 1) automatic detection of the initial optimum object boundary based on a Region Classification Model (RCM) in a pixel-level multidimensional feature space; 2) an Adaptive Edge Smoothing Update model (AESU) of contour points (e.g. misclassified or irregular points) based on iterative force field calculations with contours obtained from the RCM by minimising an energy function (an approach that does not require predefined geometric templates to guide auto-segmentation). Such an approach provides robustness in capturing a range of variations and shapes. We have conducted a comprehensive comparison between our approach and state-of-the-art existing deformable models and validated it with publicly available datasets. The experimental evaluation shows that the proposed approach significantly outperforms existing methods. The generality of the proposed approach will enable segmentation and detection of other object boundaries and provide added value in the field of medical image processing and analysis.
Keywords
Medical image processing and analysis · Machine learning · Computer-aided retinal disease diagnosis · Glaucoma
Introduction and background
Some research efforts have been made on automatic segmentation of the optic disc or optic cup [4, 5, 6, 7, 8, 9, 10, 11]. These efforts can be broadly divided into two categories: non-model-based and model-based approaches. In the non-model-based approaches [12, 13, 14, 15], boundary detection mainly uses algorithms such as thresholding, morphological operations and pixel clustering. However, the identification of the optic disc boundary suffered from obscuration by bridging retinal vessels. The model-based approaches can be classified into shape-based template matching and deformable modelling approaches. Template matching focuses on incorporating the shape of the object and its grey-level appearance in an image by matching the optic disc edge with a circular [16, 17] or elliptical shape [18]. However, these methods suffer from inaccuracy since they often fail to detect shape irregularity arising from the shape variation of an object. Deformable modelling can be further classified into free-form and parametric deformable models. Free-form deformable models have no global template structure and can freely deform the shape of an object. Typical examples include the active contour or ‘snakes’ model (ACM), the level-set model [19, 20, 21], the Chan-Vese (CV) level set model [22] (a type of ACM without edges) and their variations [23, 24, 25, 26, 27, 28]. Parametric deformable models involve an offline training process to determine a shape model parameterising diverse shape characteristics, such as the Active Shape Model (ASM) [29, 30, 31]. The ASM-based approach refers to shape approximation of an object using statistical approaches to learn the boundary shape of the optic disc from a training set [32, 33, 34].
The aforementioned deformable models have shown promise for segmentation of the optic disc or cup. However, there are still challenges in achieving high accuracy in optic disc and cup segmentation. For instance, in some cases the optic disc might not have a distinct edge due to disc tilting or peripapillary atrophy (PPA), and disc vessels could misguide the segmentation. The ASM does not adequately segment optic discs with PPA [34]. Additionally, since pathological changes may arbitrarily deform the shape of the optic disc and also distort the course of blood vessels, the existing ASM-based approaches fail to accurately extract object boundaries with variation and irregularity and are influenced by blood vessel obscuration. The modified ACM approach proposed by Xu and colleagues [35] has addressed optic disc segmentation problems caused by vasculature occlusion and PPA by adjusting the uncertain cluster points of the contour. Nevertheless, they have also observed optic disc segmentation failures due to retinal atrophy or bright retinal lesions. Furthermore, the method is dependent on initialisation parameters as it relies on local gradient information only. The accuracy of the Chan-Vese (CV) level set model is dependent on the initialisation parameters as well. Despite the application of Gabor filters at different frequencies after vasculature removal to reduce the PPA occlusion [23], the filtering parameters may need to be modified manually for better accuracy. Vasculature removal with morphological filtering can also diminish the optic disc edges, which can affect the segmentation accuracy.
 1.
A new Region Classification Model (RCM) which identifies the initial optimum contour approximation representing the optic disc or cup boundary between the inside and outside of a Region of Interest (ROI), based on pixel-wise classification in a multidimensional feature space (with features extracted at the pixel level). The multidimensional feature space represents local textural, gradient and frequency-based information, which is used as input for training a back-propagation neural network classification model of an optimum contour approximation.
 2.
A new Adaptive Edge Smoothing Update model (AESU) for contour regularisation and smoothing update based on iterative force field calculations with any contour obtained from the RCM.
The rest of this paper is organised as follows: “The proposed adaptive region-based edge smoothing deformable model” details the proposed model; “Experimental evaluation” describes the datasets used, evaluation metrics and experimental results; “Conclusion” concludes the work and highlights future work.
The proposed adaptive region-based edge smoothing deformable model
The rationale
Different from the existing approaches, the proposed method extracts feature information at the pixel level, identifies the initial optimum contour based on regional classification, and dynamically updates the contour without requiring a predefined template such as a circular or elliptical shape. The model uses the mean of the shapes in the training set only once, as an initial parameter. The model then searches for the optimum contour points based on the RCM classification; these are not necessarily identical to the landmarks of the mean shape and may change dynamically during the search. Shape irregularities and misclassified contour points are then dynamically updated and smoothed by the AESU model through minimisation of an energy function based on the force field direction. The proposed approach can capture shape variations and irregularity very well, for example in the presence of PPA or vasculature occlusion.
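The full AESU formulation is given later in the paper; as a rough illustration of the general idea of iteratively regularising a contour under a force field, the following sketch (our own simplification, not the authors' algorithm; `force` is a hypothetical callable returning a per-point external force) moves each contour point towards the midpoint of its neighbours plus an external force term:

```python
import numpy as np

def smooth_update(contour, force, alpha=0.5, iters=100):
    """Iteratively regularise a closed contour: each point moves towards the
    average of its two neighbours (internal smoothing) plus an external force
    sampled at the point. A generic sketch in the spirit of the AESU update."""
    pts = contour.astype(float).copy()
    for _ in range(iters):
        # Midpoint of the two cyclic neighbours of every contour point
        neighbours = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
        pts += alpha * (neighbours - pts) + force(pts)
    return pts
```

With a zero external force the update simply smooths out irregular points, which is the effect the AESU exploits for misclassified contour points; in the paper the external term is derived from the RCM-based force field instead.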
Pixelbased image representation in multidimensional feature space
Summary of Gaussian and Gaussian-derived features

Gaussian filter: \(\mathcal {N}(\sigma ,i_{x},j_{y}) = \frac {1}{2\pi \sigma ^{2}}e^{-\frac {{i_{x}}^{2} + {j_{y}}^{2}}{2\sigma ^{2}}}\)

Dyadic Gaussian: \(I_{mn} = \frac {R+G}{2}\); \(Y_{rg} = R + G - 2|R - G|\); centre-surround differences \(I_{mn}(c,s) = I_{mn}(c) - Interp_{s-c}\,I_{mn}(s)\); \(RG(c,s) = (R(c) - G(c)) - Interp_{s-c}(R(s) - G(s))\); \(Y_{rg}(c,s) = Y_{rg}(c) - Interp_{s-c}(Y_{rg}(s))\)

Gabor: \(Gb(x,y,\gamma ,\lambda ,\sigma ,\theta ) = \exp (-\frac {1}{2}(\frac {\hat {x}^{2}}{\sigma ^{2}}+\frac {\hat {y}^{2}\gamma ^{2}}{\sigma ^{2}}))\exp (\frac {i2\pi \hat {x}}{\lambda })\), where \(\hat {x} = x\cos \theta + y\sin \theta \) and \(\hat {y} = y\cos \theta - x\sin \theta \)

Gamma-normalised derivatives: \(L_{pp,\gamma -norm} = \frac {\sigma ^{\gamma }}{2}(\mathcal {N}_{xx}+\mathcal {N}_{yy} - \sqrt {{(\mathcal {N}_{xx}-\mathcal {N}_{yy})}^{2}+ 4{\mathcal {N}_{xy}}^{2}})\); \(L_{qq,\gamma -norm} = \frac {\sigma ^{\gamma }}{2}(\mathcal {N}_{xx}+\mathcal {N}_{yy} + \sqrt {{(\mathcal {N}_{xx}-\mathcal {N}_{yy})}^{2}+ 4{\mathcal {N}_{xy}}^{2}})\), where \(\frac {\sigma ^{\gamma }}{2}\) is the normalisation factor with \(\gamma =\frac {3}{2}\)

Differential geometric edge definition: \(L_{uu} = \mathcal {N}_{xx}+\mathcal {N}_{yy}\); \(L_{u,u} = {\mathcal {N}_{x}}^{2}+{\mathcal {N}_{y}}^{2}\); \(L_{uv} = {\mathcal {N}_{x}}^{2}\mathcal {N}_{xx} + 2\mathcal {N}_{xy}\mathcal {N}_{x} \mathcal {N}_{y}+{\mathcal {N}_{y}}^{2}\mathcal {N}_{yy}\), where \((u,v)\) is the local coordinate system [39]

Difference of Gaussians (DoG): \({\Gamma }_{\sigma _{1},\sigma _{2}}(x,y) = \frac {1}{\sigma _{1} \sqrt {2\pi }}e^{-\frac {x^{2}+y^{2}}{2{\sigma _{1}^{2}}}} - \frac {1}{\sigma _{2}\sqrt {2\pi }}e^{-\frac {x^{2}+y^{2}}{2{\sigma _{2}^{2}}}}\)
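For concreteness, the Gaussian and difference-of-Gaussians entries above can be realised as discrete kernels; a minimal sketch (our own illustration; the kernel radius is a free choice not specified by the paper, and the DoG keeps the one-dimensional normalisation constant exactly as written in the table):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Discrete 2-D Gaussian N(sigma, i, j) = exp(-(i^2+j^2)/(2 sigma^2)) / (2 pi sigma^2)."""
    i, j = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return np.exp(-(i**2 + j**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def dog_kernel(sigma1, sigma2, radius):
    """Difference-of-Gaussians kernel, term by term as in the table
    (normalisation factor 1/(sigma * sqrt(2 pi)) per component)."""
    i, j = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g = lambda s: np.exp(-(i**2 + j**2) / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return g(sigma1) - g(sigma2)
```

Convolving the image (or a colour channel) with such kernels at several scales yields the corresponding entries of the pixel-level feature vector.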
Feature extraction and selection
The feature space obtained after applying the different filters above at the pixel level is of high dimensionality. To obtain the most relevant features for the respective classification, we have adopted iterative sequential maximisation of task performance (also called ‘wrapper feature selection’ [41]), in which the data is initially divided into k folds (in our case k = 5). The first feature selected is the one with the maximum mean classification performance across the folds. During subsequent iterations, the feature which, together with the previously selected features, results in the highest mean classification performance is selected. This process continues until the improvement in classification performance is negligible (less than 0.01). To quantify classification performance, we have used measures such as Area Under the Curve (AUC), Linear Discriminant Analysis (LDA) accuracy and Quadratic Discriminant Analysis (QDA) accuracy [42].
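A minimal sketch of this greedy wrapper loop follows (our own illustration: the scorer here is a toy nearest-centroid classifier standing in for the paper's AUC/LDA/QDA measures, and the fold split is a simple shuffle):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def centroid_accuracy(X, y, train, test):
    """Toy nearest-centroid scorer (stand-in for AUC/LDA/QDA accuracy)."""
    c0 = X[train][y[train] == 0].mean(axis=0)
    c1 = X[train][y[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(X[test] - c1, axis=1)
            < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
    return np.mean(pred == y[test])

def wrapper_selection(X, y, k=5, min_gain=0.01):
    """Greedy forward ('wrapper') selection: repeatedly add the feature whose
    addition maximises mean k-fold performance; stop once the gain < min_gain."""
    folds = kfold_indices(len(y), k)
    selected, best_score = [], 0.0
    while True:
        gains = {}
        for f in range(X.shape[1]):
            if f in selected:
                continue
            cols = selected + [f]
            scores = []
            for i in range(k):
                test = folds[i]
                train = np.concatenate([folds[j] for j in range(k) if j != i])
                scores.append(centroid_accuracy(X[:, cols], y, train, test))
            gains[f] = np.mean(scores)
        if not gains:                         # every feature already selected
            return selected
        f_best = max(gains, key=gains.get)
        if gains[f_best] - best_score < min_gain:
            return selected                   # improvement below 0.01: stop
        selected.append(f_best)
        best_score = gains[f_best]
```

On data where one column is informative and another is noise, the loop keeps the informative column and stops before adding the noise column.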
Region classification model for initial optimum contour approximation
The RCM consists of two main steps: 1) Initialisation of shape or contour profile; 2) Contour profile optimisation.
Initialisation of shape or contour profile
Contour profile optimisation
Adaptive edge smoothing update
ARESM application to segmentation of optic disc and cup
Optic disc segmentation
Optic cup segmentation
Compared to optic disc segmentation, optic cup segmentation is a more challenging task for two reasons: a) there is no clear or distinct boundary between the optic cup and the rim of the optic disc; and b) vasculature occlusion. We have adapted our proposed approach to optic cup segmentation in two ways: a) the use of prior knowledge in the RCM for accurate detection of the optic cup by adding an additional feature, namely distance maps; b) adaptive smoothing update using weighted features to make the optic cup force field more influential and to enhance the region of interest (ROI) by multiplying the external energy function with the weighted features from the RCM.
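The exact distance-map definition is not reproduced in this excerpt; as one plausible reading, a map of distances from every ROI pixel to the previously segmented optic disc region could be computed as follows (brute-force sketch, our own illustration, assuming a non-empty binary mask; a real implementation would use a fast distance transform):

```python
import numpy as np

def distance_map(mask):
    """Euclidean distance from every pixel to the nearest foreground pixel of
    a binary mask (brute force; fine for small ROIs, illustrates the extra
    distance-map feature added to the RCM for cup segmentation)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)                      # (P, 2) foreground coords
    h, w = mask.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                indexing="ij"), axis=-1)  # (h, w, 2) pixel coords
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=2)                                  # nearest-foreground distance
```

Each pixel's distance value can then be appended to its feature vector alongside the texture, gradient and frequency features.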
Experimental evaluation
Evaluation metrics
The CDR error is measured as the absolute difference \(|CDR_{c} - CDR_{s}|\), where \(CDR_{c}\) is the CDR from clinical annotations and \(CDR_{s}\) is the CDR from segmentation results. The CDR values have been evaluated in the vertical and horizontal meridians, as well as the ratio of the area covered by the optic cup to the area covered by the optic disc.
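The three CDR variants can be computed directly from binary cup and disc masks; a sketch of the evaluation described here (our own illustration, not the authors' implementation; both masks are assumed non-empty and aligned):

```python
import numpy as np

def cdr_values(cup, disc):
    """Vertical, horizontal and area cup-to-disc ratios from binary masks."""
    def extent(mask, axis):
        proj = np.asarray(mask).any(axis=axis)   # rows (axis=1) or columns (axis=0)
        idx = np.nonzero(proj)[0]
        return idx[-1] - idx[0] + 1              # span of the foreground region
    vertical = extent(cup, 1) / extent(disc, 1)      # height ratio
    horizontal = extent(cup, 0) / extent(disc, 0)    # width ratio
    area = cup.sum() / disc.sum()                    # area ratio
    return vertical, horizontal, area
```

The vertical ratio is the one most commonly used clinically, as noted in the text.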
Datasets
To validate the proposed approach, we have used two publicly available datasets: the RIM-ONE [47] and Drishti-GS [48] datasets. The datasets include retinal images of normal, glaucoma and glaucoma-suspect cases, comprising 209 images in total.
Interobserver variability in the datasets
RIM-ONE (expert 1 vs 2):
Image type | Optic disc | Optic cup
Normal images | 4.5% ± 2.07% | 6.93% ± 2.22%
Glaucoma images | 5.01% ± 3.15% | 7.31% ± 3.81%
All images | 4.74% ± 2.63% | 7.11% ± 3.06%

Drishti-GS:
Expert pair | Optic disc | Optic cup
1 vs 2 | 1.00% ± 0.39% | 1.47% ± 0.83%
1 vs 3 | 1.87% ± 0.61% | 3.07% ± 1.57%
1 vs 4 | 2.99% ± 1.35% | 5.31% ± 2.10%
2 vs 3 | 0.84% ± 0.27% | 1.57% ± 0.94%
2 vs 4 | 1.96% ± 1.20% | 3.81% ± 1.61%
3 vs 4 | 1.09% ± 1.02% | 2.22% ± 1.25%
Average CDR values (vertical, horizontal and area) in the RIM-ONE and Drishti-GS datasets

CDR type | RIM-ONE Normal | RIM-ONE Glaucoma | RIM-ONE Both | Drishti-GS
Vertical | 0.42 ± 0.10 | 0.60 ± 0.17 | 0.50 ± 0.16 | 0.69 ± 0.13
Horizontal | 0.40 ± 0.11 | 0.57 ± 0.16 | 0.48 ± 0.16 | 0.70 ± 0.14
Area | 0.18 ± 0.09 | 0.37 ± 0.19 | 0.27 ± 0.17 | 0.51 ± 0.18
Model parameterisation
The ARESM requires parameterisation for accurate boundary detection and segmentation, related to several factors: 1) whether vasculature removal is required before feature selection; 2) feature selection; 3) training protocol; and 4) classifier and contour profile optimisation parameters. The following sections detail the approach to model parameterisation.
Determination on whether vasculature removal is required
The ICPs have been measured in terms of AUC value for individual features. The results show that, for both the optic disc and the optic cup, the features calculated after vasculature removal have higher ICPs than those calculated without it. We have therefore determined the feature set from the features calculated after vasculature removal.
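The AUC used to rank an individual feature can be computed directly from its values and the pixel labels via the rank-sum (Mann-Whitney) statistic; a minimal sketch (our own illustration; ties between feature values are not handled):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve from scores and binary labels (1 = positive),
    via the rank-sum identity AUC = (sum of positive ranks - n1(n1+1)/2) / (n1 n0)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)  # 1-based ranks
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
```

A feature that perfectly separates the two regions scores 1.0; a feature ranked in the wrong direction scores 0.0.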
Feature selection
Feature symbols for each feature set obtained by sequential AUC maximisation for optic disc and optic cup region determination
Optic disc: \(\mathcal {N}_{R}(16)\), I_mn(4, 8), BY(4, 8), L_{u,uG}(8), \(\mathcal {N}_{G}(16)\), L_{uuG}(16), L_{qq,γ-norm R}(16), I_mn(4, 7), BY(4, 7), L_{uvR}(4), L_{uv,vuG}(2), L_{uuG}(2), L_{uvG}(4), L_{uuR}(16), L_{uv,vuR}(16), L_{uv,vuG}(16), Γ_{8,4,R}, L_{qq,γ-normG}(16), Γ_{4,2,G}, L_{uv,vuR}(8)

Optic cup: \(\mathcal {N}_{R}(16)\), \(\mathcal {N}_{xxR}(16)\), Γ_{16,4,G}, L_{u,uG}(8), BY(4, 7), I_mn(4, 8), L_{uv,vuG}(16), \(\mathcal {N}_{G}(16)\), I_mn(4, 7), I_mn(3, 7), \(\mathcal {N}_{R}(8)\), \(\mathcal {N}_{G}(16)\), \(\mathcal {N}_{yG}(8)\), \(\mathcal {N}_{yyG}(16)\), \(\mathcal {N}_{xxR}(8)\), L_{qq,γ-normG}(8), RG(4, 8), \(\mathcal {N}_{xxG}(16)\), Γ_{16,4,G}
Training protocol
The training was performed on the RIM-ONE dataset with 2-fold cross-validation, i.e. training on one half whilst testing on the other. The Drishti-GS dataset has been tested using the model built on the RIM-ONE dataset.
As far as the training parameters in “Contour profile optimisation” are concerned, we have trained a single-hidden-layer back-propagation neural network with the number of input and hidden neurons equal to the number of selected features (20 for the optic disc and 25 for the optic cup). Varying the number of hidden neurons from 15 to 30 gives similar results. Other parameters include n _{ p } = 25 − 35, m = 7 − 10, n = 200 (optic disc) and 100 (optic cup).
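A single-hidden-layer back-propagation network of the kind described can be sketched as follows (our own minimal numpy illustration, not the authors' implementation; the layer sizes, learning rate and tanh/sigmoid choices are placeholders):

```python
import numpy as np

class TinyMLP:
    """Single-hidden-layer back-propagation network for binary pixel labels."""
    def __init__(self, n_in, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 1 / np.sqrt(n_hidden), (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)              # hidden activations
        self.p = 1 / (1 + np.exp(-(self.h @ self.W2 + self.b2)))  # P(class 1)
        return self.p

    def step(self, X, y):
        """One full-batch gradient step on the cross-entropy loss."""
        p = self.forward(X)
        err = p - y[:, None]                  # dLoss/dlogit for sigmoid + CE
        dW2 = self.h.T @ err / len(y)
        db2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (1 - self.h ** 2)   # back-prop through tanh
        dW1 = X.T @ dh / len(y)
        db1 = dh.mean(axis=0)
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * db2
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * db1
```

In the paper's setting the inputs would be the selected pixel-level features (20 or 25 of them) rather than the two-dimensional toy inputs used here.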
Accuracy comparison with state-of-the-art approaches
The comparison covers: 1) accuracy performance comparison with the previous approaches; and 2) accuracy performance comparison based on CDR (cup-to-disc ratio) values.
Optic disc segmentation accuracy comparison
Comparison of optic disc segmentation results: mean and standard deviations of Dice coefficient

Model | RIM-ONE Normal | RIM-ONE Glaucoma | RIM-ONE All | Drishti-GS
ARESM | 0.92 ± 0.06 | 0.90 ± 0.07 | 0.91 ± 0.07 | 0.95 ± 0.02
ASM model | 0.85 ± 0.10 | 0.77 ± 0.16 | 0.76 ± 0.13 | 0.87 ± 0.06
ACM model | 0.86 ± 0.07 | 0.85 ± 0.09 | 0.86 ± 0.08 | 0.91 ± 0.03
CV model | 0.88 ± 0.13 | 0.86 ± 0.14 | 0.87 ± 0.14 | 0.85 ± 0.11
Based on the results, our proposed approach yields the highest Dice coefficients in comparison with the existing models. Figure 13c shows examples of the visual results. The ASM model-based segmentation is misguided because the mean shape from the training set remains constant once trained; Fig. 13d shows examples of the visual results. The ACM and CV model-based segmentations are comparable to our approach on these images. Nevertheless, their performance is uncertain, as presented in other examples. The ACM depends on strong optic disc edges, which are not present in every example; examples of the visual results are shown in Fig. 13e. Optic disc margin segmentation errors can occur using the Chan-Vese model when PPA is present (Fig. 13f). For the RIM-ONE dataset, the average accuracy of optic disc segmentation is 91%; for Drishti-GS it is 95%.
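The Dice coefficient reported in these tables is the standard overlap measure between a segmentation and its reference mask; a minimal sketch (our own illustration):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (perfect trivial agreement)."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Identical masks score 1.0, disjoint masks 0.0, so the 0.91-0.95 values above indicate near-complete overlap with the clinical annotations.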
Optic cup segmentation accuracy comparison
Comparison of optic cup segmentation results: mean and standard deviations of Dice coefficient

Model | RIM-ONE Normal | RIM-ONE Glaucoma | RIM-ONE All | Drishti-GS
ARESM | 0.91 ± 0.06 | 0.89 ± 0.06 | 0.89 ± 0.06 | 0.81 ± 0.10
ASM | 0.78 ± 0.09 | 0.73 ± 0.13 | 0.76 ± 0.12 | 0.72 ± 0.14
ACM | 0.76 ± 0.10 | 0.81 ± 0.09 | 0.79 ± 0.10 | 0.71 ± 0.12
CV | 0.71 ± 0.18 | 0.73 ± 0.17 | 0.72 ± 0.18 | 0.80 ± 0.08
The results suggest that the proposed method achieves high accuracy compared with the existing approaches. One reason is that optic cup segmentation is highly dependent on accurate optic disc segmentation. Moreover, algorithms such as ACM and CV do not converge in the case of cup segmentation. The ASM produces annotation failures due to vasculature occlusion, which is not the case for the ARESM. The ARESM method does fail on some of the normal images due to their very small optic cup size. Nevertheless, cup segmentation has an average accuracy of 89% for the RIM-ONE and 81% for the Drishti-GS database images.
Accuracy comparison based on CDR
CDR is an important indicator for glaucoma diagnosis. Accurate automated CDR assessment requires precise assessment of the cup and disc margins compared with an acceptable reference standard.
In order to compare the clinical manual CDR with the automatically determined CDR, we have used the RIM-ONE dataset, in which both the optic disc and optic cup have been manually annotated by clinicians. Hence the manual CDR values can be calculated from these annotations in a) the vertical meridian, b) the horizontal meridian and c) the area. However, CDR in the vertical meridian is most commonly used by clinicians in optic nerve evaluation for glaucoma [1].
Both datasets contain three types of images: normal, glaucoma and glaucoma-suspect images. Therefore, we have evaluated the CDRs (i.e. vertical, horizontal and area CDR) in two settings: 1) normal (N) vs glaucoma (G); and 2) normal (N) vs glaucoma and suspects (G + S). The RIM-ONE dataset has 85 normal (N), 39 glaucoma (G) and 35 glaucoma-suspect (S) images.
Comparison between our proposed approach (ARESM), the clinical manual CDR and the other existing methods, in terms of mean CDR error and p-values of the paired t-test comparing the ROC curves generated by manual CDRs and the CDRs from the automatic methods
Vertical CDR:
Method | N | G | G+S | All | p-value (N vs. G)
ARESM | 0.08 | 0.11 | 0.08 | 0.07 | 0.05
ASM | 0.22 | 0.21 | 0.21 | 0.22 | < 0.0001
ACM | 0.20 | 0.13 | 0.13 | 0.17 | < 0.0001
CV | 0.13 | 0.14 | 0.14 | 0.13 | < 0.0001

Horizontal CDR:
Method | N | G | G+S | All | p-value (N vs. G)
ARESM | 0.08 | 0.10 | 0.08 | 0.07 | 0.16
ASM | 0.31 | 0.29 | 0.27 | 0.29 | 0.05
ACM | 0.20 | 0.12 | 0.12 | 0.16 | < 0.0001
CV | 0.12 | 0.13 | 0.12 | 0.12 | < 0.0001

Area CDR:
Method | N | G | G+S | All | p-value (N vs. G)
ARESM | 0.05 | 0.11 | 0.08 | 0.06 | 0.02
ASM | 0.26 | 0.33 | 0.29 | 0.27 | < 0.0001
ACM | 0.19 | 0.14 | 0.13 | 0.16 | < 0.0001
CV | 0.09 | 0.14 | 0.12 | 0.11 | < 0.0001
Conclusion

We have developed the Region Classification Model (RCM), which identifies the initial optimum contour approximation representing the optic disc or cup boundary between the inside and outside of the region of interest, based on pixel-wise classification in a multidimensional feature space, and performs a region search for the optimum contour profile. This differs from existing models such as the conventional ASM, where the contour is static once it has been trained from the training set. Our model can dynamically search the region and obtain the optimum contour.

To overcome misclassification and irregularity of contour points, we have proposed the Adaptive Edge Smoothing Update model (AESU), which can dynamically smooth and update the irregularities and misclassified points by minimising the energy function according to the force field direction in an iterative manner. Our model does not require a predefined template such as a circle or an ellipse; the input can be any contour generated from the RCM. This is different from the existing approaches, which use circular or elliptical fitting for the smoothing update.
We have applied our approach to both optic disc and optic cup segmentation. We have conducted a comprehensive comparison with existing approaches such as the ASM, ACM and Chan-Vese (CV) models. The approaches were validated with two publicly available datasets: RIM-ONE and Drishti-GS.
For optic disc segmentation, the average accuracy is 91% on the RIM-ONE dataset and 95% on Drishti-GS. It should be noted that the proposed approach works well on the high-resolution images of RIM-ONE, since the low-resolution images have blurred vasculature and optic disc edges. For optic cup segmentation, the average accuracy is 89% for the RIM-ONE and 81% for the Drishti-GS databases. The optic cup segmentation results are highly dependent on accurate segmentation of the optic disc. Moreover, failed cases of optic cup segmentation include normal images with very small cup size. Future work will focus on more accurate segmentation of the small cup.
Based on the rationale outlined here, our proposed approach can also be applied to boundary detection of other objects. Future work is needed to apply this algorithm to other object boundary detections and also improve accuracy of the ARESM model.
Acknowledgments
This project is fully sponsored by EPSRC-DHPA and Optos plc., entitled “Automatic Detection of Features in Retinal Imaging to Improve Diagnosis of Eye Diseases” (Grant Ref: EP/J50063X/1). Dr. Pasquale is supported by the Harvard Glaucoma Center of Excellence. Brian J. Song has been supported by the Harvard Vision Clinical Scientist Development Program 2K12 EY01633511.
Compliance with Ethical Standards
The authors declare that they have no conflict of interest. All the datasets used in this manuscript (RIM-ONE [47] and Drishti-GS [48]) are publicly available and already in the public domain. There are no issues with ethical approval or informed consent.
References
1. Jonas J., Budde W., Jonas S.: Ophthalmoscopic evaluation of optic nerve head. Surv. Ophthalmol. 43: 293–320, 1999
2. Tangelder G., Reus N., Lemij H.: Estimating the clinical usefulness of optic disc biometry for detecting glaucomatous change over time. Eye 20: 755–763, 2006
3. Liu J., Lim J. H., Wong W. K., Li H., Wong T. Y. (2011) Automatic cup to disc ratio measurement system. http://www.faqs.org/patents/app/20110091083
4. Haleem M. S., Han L., van Hemert J., Li B.: Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review. Comput. Med. Imaging Graph. 37: 581–596, 2013
5. Mahapatra D.: Combining multiple expert annotations using semi-supervised learning and graph cuts for medical image segmentation. Comput. Vis. Image Underst. 151: 114–123, 2016
6. Qureshi R. J., Kovacs L., Harangi B., Nagy B., Peto T., Hajdu A.: Combining algorithms for automatic detection of optic disc and macula in fundus images. Comput. Vis. Image Underst. 116(1): 138–145, 2012
7. Salazar-Gonzalez A., Kaba D., Li Y., Liu X.: Segmentation of the blood vessels and optic disk in retinal images. IEEE Journal of Biomedical and Health Informatics 18(6): 1874–1886, 2014
8. Zhang D., Zhao Y.: Novel accurate and fast optic disc detection in retinal images with vessel distribution and directional characteristics. IEEE Journal of Biomedical and Health Informatics 20(1): 333–342, 2016
9. Roychowdhury S., Koozekanani D. D., Kuchinka S. N., Parhi K. K.: Optic disc boundary and vessel origin segmentation of fundus images. IEEE Journal of Biomedical and Health Informatics 20(6): 1562–1574, 2016
10. Zhao Y., Zheng Y., Liu Y., Yang J., Zhao Y., Chen D., Wang Y.: Intensity and compactness enabled saliency estimation for leakage detection in diabetic and malarial retinopathy. IEEE Trans. Med. Imaging 36(1): 51–63, 2017
11. Zhao Y., Liu Y., Wu X., Harding S. P., Zheng Y.: Retinal vessel segmentation: an efficient graph cut approach with retinex and local phase. PLoS ONE 10(4): e0122332, 2015
12. Nayak J., Acharya R., Bhat P., Shetty N., Lim T. C.: Automated diagnosis of glaucoma using digital fundus images. J. Med. Syst. 33: 337–346, 2009
13. Walter T., Klein J. C.: Segmentation of color fundus images of the human retina: detection of the optic disc and the vascular tree using morphological techniques. In: Proceedings of the Second International Symposium on Medical Data Analysis, 2001, pp 282–287
14. Vishnuvarthanan A., Rajasekaran M. P., Govindaraj V., Zhang Y., Thiyagarajan A.: An automated hybrid approach using clustering and nature inspired optimization technique for improved tumor and tissue segmentation in magnetic resonance brain images. Appl. Soft Comput. 57: 399–426, 2017
15. Wang S., Li Y., Shao Y., Cattani C., Zhang Y., Du S.: Detection of dendritic spines using wavelet packet entropy and fuzzy support vector machine. CNS Neurol. Disord. Drug Targets 16(2): 116–121, 2017
16. Abdel-Ghafar R., Morris T.: Progress towards automated detection and characterization of the optic disc in glaucoma and diabetic retinopathy. Inform. Health Soc. Care 32(1): 19–25, 2007
17. Lalonde M., Beaulieu M., Gagnon L.: Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching. IEEE Trans. Med. Imaging 20: 1193–1200, 2001
18. Pallawala P., Hsu W., Lee M., Eong K.: Automatic localization and contour detection of optic disc. In: ECCV, 2004, pp 139–151
19. Kass M., Witkin A., Terzopoulos D.: Snakes: active contour models. Int. J. Comput. Vis. 1(4): 321–331, 1987
20. Zhao Y., Zhao J., Yang J., Liu Y., Zhao Y., Zheng Y., Xia L., Wang Y.: Saliency driven vasculature segmentation with infinite perimeter active contour model. Neurocomputing 259: 201–209, 2017
21. Sethian J.: Level set methods and fast marching methods. Cambridge: Cambridge University Press, 1999
22. Chan T., Vese L.: An active contour model without edges. IEEE Trans. Image Process. 10(2): 266–277, 2002
23. Joshi G., Sivaswamy J., Krishnadas S.: Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Trans. Med. Imaging 30: 1192–1205, 2011
24. Lowell J., Hunter A., Steel D., Basu A., Ryder R., Fletcher E., Kennedy L.: Optic nerve head segmentation. IEEE Trans. Biomed. Eng. 23: 256–264, 2004
25. Xu J., Chutatape O., Sung E., Zheng C., Kuan P. C. T.: Optic disk feature extraction via modified deformable model technique for glaucoma analysis. Pattern Recogn. 40: 2063–2076, 2007
26. Wong D., Liu J., Lim J., Jia X., Yin F., Li H., Wong T.: Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008, pp 2266–2269
27. Osareh A., Mirmehdi M., Thomas B., Markham R.: Comparison of colour spaces for optic disc localisation in retinal images. In: Proceedings of the 16th International Conference on Pattern Recognition, 2002, pp 743–746
28. Tang Y., Li X., von Freyberg A., Goch G.: Automatic segmentation of the papilla in a fundus image based on the CV model and a shape restraint. In: Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), vol 1, 2006, pp 183–186
29. Cootes T., Taylor C. (2004) Statistical models of appearance for computer vision. Tech. Rep., University of Manchester
30. Cheng J., Liu J., Yin F., Lee B. H., Wong D. W. K., Aung T., Cheng C. Y., Wong T. Y.: Self-assessment for optic disc segmentation. In: Engineering in Medicine and Biology Society (EMBC), 35th Annual International Conference of the IEEE, 2013, pp 5861–5864
31. Yin F., Liu J., Ong S. H., Sun Y., Wong D. W., Tan N. M., Cheung C., Baskaran M., Aung T., Wong T. Y.: Model-based optic nerve head segmentation on retinal fundus images. In: Engineering in Medicine and Biology Society (EMBC), Annual International Conference of the IEEE, 2011, pp 2626–2629
32. Fengshou Y. (2011) Extraction of features from fundus images for glaucoma assessment. Master’s Thesis, National University of Singapore
33. Haleem M. S., Han L., Li B., Nisbet A., van Hemert J., Verhoek M.: Automatic extraction of optic disc boundary for detecting retinal diseases. In: 14th IASTED International Conference on Computer Graphics and Imaging (CGIM), 2013, pp 40–47
34. Li H., Chutatape O.: Boundary detection of optic disk by a modified ASM method. Pattern Recogn. 36: 2093–2104, 2003
35. Xu J., Chutatape O., Chew P.: Automated optic disk boundary detection by modified active contour model. IEEE Trans. Biomed. Eng. 54: 473–482, 2007
36. Abràmoff M., Alward W., Greenlee E., Shuba L., Kim C., Fingert J., Kwon Y.: Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Investig. Ophthalmol. Vis. Sci. 48: 1665–1673, 2007
37. Anderson C. H., Bergen J. R., Burt P. J., Ogden J. M.: Pyramid methods in image processing. RCA Engineer 29: 33–41, 1984
38. Daugman J.: Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Trans. Acoust. Speech Signal Process. 36(7): 1169–1179, 1988
39. Lindeberg T.: Edge detection and ridge detection with automatic scale selection. Int. J. Comput. Vis. 30(2): 117–154, 1998
40. Haleem M. S., Han L., van Hemert J., Fleming A., Pasquale L. R., Silva P. S., Song B. J., Aiello L. P.: Regional image features model for automatic classification between normal and glaucoma in fundus and scanning laser ophthalmoscopy (SLO) images. J. Med. Syst. 40(6): 132, 2016
41. Serrano A. J., Soria E., Martin J. D., Magdalena R., Gomez J.: Feature selection using ROC curves on classification problems. In: The International Joint Conference on Neural Networks (IJCNN), 2010, pp 1–6
42. Smola A., Vishwanathan S.: Introduction to machine learning. Cambridge: Cambridge University Press, 2008
43. Stegmann M., Gomez D. (2002) A brief introduction to statistical shape analysis. Tech. rep.
44. Yuan X., Giritharan B., Oh J.: Gradient vector flow driven active shape for image segmentation. In: IEEE International Conference on Multimedia and Expo, 2007, pp 2058–2061
45. Lupascu C. A., Tegolo D., Trucco E.: FABC: retinal vessel segmentation using AdaBoost. IEEE Trans. Inf. Technol. Biomed. 14(5): 1267–1274, 2010
46. Sun J., Luan F., Wu H. (2015) Optic disc segmentation by balloon snake with texture from color fundus image. J. Biomed. Imaging 4
47. Fumero F., Sigut J., Alayón S., González-Hernández M., González M.: Interactive tool and database for optic disc and cup segmentation of stereo and monocular retinal fundus images. In: Short Papers Proceedings – WSCG, 2015, pp 91–97
48. Sivaswamy J., Krishnadas S. R., Joshi G. D., Jain M., Tabish A. U. S.: Drishti-GS: retinal image dataset for optic nerve head (ONH) segmentation. In: IEEE 11th International Symposium on Biomedical Imaging (ISBI), 2014, pp 53–56
49. Mary M. C. V. S., Rajsingh E. B., Jacob J. K. K., Anandhi D., Amato U., Selvan S. E.: An empirical study on optic disc segmentation using an active contour model. Biomed. Signal Process. Control 18: 19–29, 2015
50. Joshi G. D., Sivaswamy J., Krishnadas S.: Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Trans. Med. Imaging 30(6): 1192–1205, 2011
51. Babu T. R. G., Shenbagadevi S.: Automatic detection of glaucoma using fundus image. Eur. J. Sci. Res. 59: 22–32, 2011
52. Kose C., Ikibas C.: Statistical techniques for detection of optic disc and macula and parameters measurement in retinal fundus images. Journal of Medical and Biological Engineering 31: 395–404, 2010
53. DeLong E., DeLong D. M., Clarke-Pearson D. L.: Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics 44(3): 837–845, 1988
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.