Boosting Convolutional Filters with Entropy Sampling for Optic Cup and Disc Image Segmentation from Fundus Images
We propose a novel convolutional neural network (CNN) based method for optic cup and disc segmentation. To reduce computational complexity, an entropy-based sampling technique is introduced that yields superior results over uniform sampling. Filters are learned over several layers, with the output of each layer serving as the input to the next. A softmax logistic regression classifier is then trained on the output of all learned filters. On several error metrics, the proposed algorithm outperforms existing methods on the public DRISHTI-GS data set.
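The abstract does not specify how the entropy-based sampling is implemented; the sketch below is an illustrative interpretation, not the authors' method. It assumes patches are drawn from a grayscale fundus image with probability proportional to the Shannon entropy of their intensity histogram, so that high-information regions (such as the optic disc and cup boundaries) are over-represented relative to uniform sampling. The function names `patch_entropy` and `entropy_sample` and all parameter choices are hypothetical.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (bits) of a patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def entropy_sample(image, patch_size=16, n_samples=100, rng=None):
    """Sample patch centers with probability proportional to local
    entropy, instead of uniformly. `image` is a 2-D array in [0, 1]."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    half = patch_size // 2
    centers, weights = [], []
    # Evaluate entropy on a half-overlapping grid of candidate patches.
    for y in range(half, h - half, half):
        for x in range(half, w - half, half):
            patch = image[y - half:y + half, x - half:x + half]
            centers.append((y, x))
            weights.append(patch_entropy(patch))
    weights = np.asarray(weights)
    probs = weights / weights.sum()
    idx = rng.choice(len(centers), size=n_samples, replace=True, p=probs)
    return [centers[i] for i in idx]
```

Under this scheme, flat background regions with near-zero entropy are rarely selected, which is one plausible way the method could reduce the number of training patches (and hence computational cost) without discarding boundary detail.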
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 International License (http://creativecommons.org/licenses/by-nc/2.5/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.