Automatic Fetal Ultrasound Standard Plane Detection Using Knowledge Transferred Recurrent Neural Networks
Accurate acquisition of fetal ultrasound (US) standard planes is one of the most crucial steps in obstetric diagnosis. The conventional way of acquiring standard planes requires thorough knowledge of fetal anatomy and intensive manual labor. Hence, automatic approaches are highly demanded in clinical practice. However, automatically detecting standard planes containing key anatomical structures in US videos remains challenging due to the high intra-class variation of standard planes. Unlike previous studies that developed a specific method for each anatomical standard plane, we present a general framework to detect standard planes from US videos automatically. Instead of relying on hand-crafted visual features, our framework explores spatio-temporal feature learning with a novel knowledge transferred recurrent neural network (T-RNN), which combines a deep hierarchical visual feature extractor with a temporal sequence learning model. To extract visual features effectively, we propose a joint learning framework with knowledge transfer across multiple tasks, which addresses the insufficiency of limited training data. Extensive experiments on different US standard planes with hundreds of videos corroborate that our method achieves promising results, outperforming state-of-the-art methods.
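The abstract describes a two-stage design: a deep visual feature extractor applied per frame, followed by a recurrent model that aggregates evidence over the video's temporal dimension. As a rough illustration of the second stage, the sketch below runs a single LSTM cell over a sequence of per-frame feature vectors and scores each frame as standard-plane vs. background. This is a minimal NumPy forward pass written for this summary, not the authors' T-RNN; all names (`SimpleLSTM`, `classify_video`) and dimensions are hypothetical, and the per-frame features are assumed to come from a pretrained CNN.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleLSTM:
    """Minimal LSTM cell operating on per-frame feature vectors (illustrative only)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weights for the four gates: input, forget, candidate, output.
        self.W = rng.normal(scale=0.1, size=(4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        # One recurrence step: concatenate current input with previous hidden state.
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i = sigmoid(z[:H])        # input gate
        f = sigmoid(z[H:2 * H])   # forget gate
        g = np.tanh(z[2 * H:3 * H])  # candidate cell state
        o = sigmoid(z[3 * H:])    # output gate
        c = f * c + i * g         # update cell memory
        h = o * np.tanh(c)        # new hidden state
        return h, c

def classify_video(frame_features, lstm, w_out, b_out):
    """Score each frame of a video (rows of frame_features are hypothetical
    CNN features) as standard-plane probability in [0, 1]."""
    h = np.zeros(lstm.hidden_dim)
    c = np.zeros(lstm.hidden_dim)
    scores = []
    for x in frame_features:
        h, c = lstm.step(x, h, c)
        scores.append(sigmoid(w_out @ h + b_out))
    return np.array(scores)
```

The design choice the recurrent stage captures is that a standard plane is easier to recognize in context: neighboring frames constrain which anatomical structures can plausibly appear, which is information a per-frame classifier alone discards.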
Keywords: Right Ventricle · Knowledge Transfer · Recurrent Neural Network · Right Atrium · Convolutional Neural Network