Abstract
Automatic segmentation has great potential to facilitate morphological measurements while increasing efficiency. Nevertheless, users often want to edit the segmentation to their needs and require different tools to do so. Methods have been developed to edit the segmentations produced by automatic methods based on user input, primarily for binary segmentations. Here, however, we present a unique training strategy for convolutional neural networks (CNNs) trained on top of an automatic method to enable interactive segmentation editing that is not limited to binary segmentation. By employing a robot-user during training, we closely mimic realistic use cases to achieve optimal editing performance. In addition, we show that increasing the number of iterative interactions during training, up to ten, substantially improves segmentation editing performance. Furthermore, we compare our segmentation editing CNN (interCNN) to state-of-the-art interactive segmentation algorithms and show superior or on-par performance.
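The core idea of the training strategy is an interaction loop: an automatic segmentation is produced, a simulated (robot) user supplies corrective input on mislabeled regions, and an editing network refines the prediction, repeated for up to ten rounds. The following is a minimal NumPy sketch of such a loop under toy assumptions; the function names (`robot_user_scribble`, `apply_scribble`) and the trivial "editing" step that simply trusts the user input are illustrative stand-ins, not the paper's actual interCNN or robot-user implementation.

```python
import numpy as np

def robot_user_scribble(pred, gt):
    """Simulate a user marking a mislabeled pixel with its true label.
    Returns a scribble map: 0 = no input, (label + 1) where corrected."""
    errors = pred != gt
    scribble = np.zeros_like(gt)
    if errors.any():
        # pick one mislabeled pixel (here simply the first found)
        ys, xs = np.nonzero(errors)
        y, x = ys[0], xs[0]
        scribble[y, x] = gt[y, x] + 1
    return scribble

def apply_scribble(pred, scribble):
    """Toy stand-in for the editing network: adopt the user's label
    wherever a scribble was placed."""
    corrected = pred.copy()
    mask = scribble > 0
    corrected[mask] = scribble[mask] - 1
    return corrected

# Toy multi-class example: 4x4 ground truth with labels {0, 1, 2}
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 1, 1],
               [2, 2, 1, 1]])
pred = gt.copy()
pred[0, 2] = 0  # one pixel the "automatic method" got wrong

# Up to ten interaction rounds, mirroring the training setup in the abstract
for _ in range(10):
    s = robot_user_scribble(pred, gt)
    pred = apply_scribble(pred, s)

errors = int((pred != gt).sum())
print(errors)
```

In the actual method, `apply_scribble` would be a CNN that takes the image, the current prediction, and the accumulated user input as channels, so corrections propagate beyond the scribbled pixels; the sketch only shows the loop structure that the robot-user makes trainable.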
Acknowledgements
We thank the Swiss Data Science Center (project C17-04 deepMICROIA) for funding and acknowledge NVIDIA for GPU support.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Bredell, G., Tanner, C., Konukoglu, E. (2018). Iterative Interaction Training for Segmentation Editing Networks. In: Shi, Y., Suk, HI., Liu, M. (eds) Machine Learning in Medical Imaging. MLMI 2018. Lecture Notes in Computer Science(), vol 11046. Springer, Cham. https://doi.org/10.1007/978-3-030-00919-9_42
DOI: https://doi.org/10.1007/978-3-030-00919-9_42
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-00918-2
Online ISBN: 978-3-030-00919-9
eBook Packages: Computer Science (R0)