
First Trimester Gaze Pattern Estimation Using Stochastic Augmentation Policy Search for Single Frame Saliency Prediction

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12722)

Abstract

While performing an ultrasound (US) scan, sonographers direct their gaze at regions of interest to verify that the correct plane is acquired and to interpret the acquisition frame. Predicting sonographer gaze on US videos is useful for identifying the spatio-temporal patterns that are important for US scanning. This paper investigates using sonographer gaze, in the form of gaze-tracking data, within a multi-modal imaging deep learning framework to assist the analysis of the first trimester fetal ultrasound scan. Specifically, we propose an encoder-decoder convolutional neural network with skip connections that predicts the visual gaze for each frame, trained and evaluated on 115 first trimester ultrasound videos: 29,250 video frames for training, 7,290 for validation and 9,126 for testing. We find that a dataset of this size benefits from automated data augmentation, which alleviates model overfitting and reduces the structural-variation imbalance of US anatomical views between the training and test sets. To this end, we employ a stochastic augmentation policy search method to improve saliency prediction performance. Using the learnt policies, our models outperform the baseline on KLD, SIM, NSS and CC (2.16, 0.27, 4.34 and 0.39 versus 3.17, 0.21, 2.92 and 0.28; lower is better for KLD, higher for the others).
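
The full paper describes the network and policy search in detail; as a rough, self-contained illustration of what a RandAugment-style stochastic augmentation policy applied to paired (frame, gaze map) training data could look like, consider the Python sketch below. The operation set, magnitude ranges and function names (randaugment_pair, etc.) are illustrative assumptions, not the authors' implementation.

import random
import numpy as np

# Geometric ops are applied to both the ultrasound frame and its gaze
# (saliency) map so the supervision stays aligned; intensity ops touch
# the frame only. Ops and magnitude ranges are illustrative assumptions.

def hflip(frame, gaze, _m):
    return np.fliplr(frame), np.fliplr(gaze)

def shift(frame, gaze, m):
    # Shift both images horizontally by up to ~10% of the width (wraps around).
    dx = int(0.1 * m * frame.shape[1])
    return np.roll(frame, dx, axis=1), np.roll(gaze, dx, axis=1)

def brightness(frame, gaze, m):
    # Additive intensity shift, assuming the frame is scaled to [0, 1].
    return np.clip(frame + 0.3 * m, 0.0, 1.0), gaze

def contrast(frame, gaze, m):
    factor = 1.0 + 0.5 * m
    return np.clip((frame - frame.mean()) * factor + frame.mean(), 0.0, 1.0), gaze

OPS = [hflip, shift, brightness, contrast]

def randaugment_pair(frame, gaze, n_ops=2, magnitude=0.5):
    """RandAugment-style policy: apply n_ops randomly chosen ops,
    all at one shared global magnitude, to a (frame, gaze map) pair."""
    for op in random.choices(OPS, k=n_ops):
        frame, gaze = op(frame, gaze, magnitude)
    return frame, gaze

if __name__ == "__main__":
    frame = np.random.rand(224, 288).astype(np.float32)  # dummy US frame
    gaze = np.random.rand(224, 288).astype(np.float32)   # dummy gaze map
    aug_frame, aug_gaze = randaugment_pair(frame, gaze, n_ops=2, magnitude=0.7)
    print(aug_frame.shape, aug_gaze.shape)

Under this RandAugment-style formulation, the "policy search" collapses to choosing the pair (n_ops, magnitude); a stochastic search samples candidate settings and scores them with validation saliency metrics (e.g. KLD) rather than sweeping a full grid.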

Keywords

Fetal ultrasound · First trimester · Gaze tracking · Single frame saliency prediction · U-Net · Data augmentation

Notes

Acknowledgements

This work is supported by the ERC (ERC-ADG-2015 694581, project PULSE) and the EPSRC (EP/R013853/1 and EP/T028572/1). AP is funded by the NIHR Oxford Biomedical Research Centre.


Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Department of Engineering Science, University of Oxford, Oxford, UK
  2. Nuffield Department of Women’s and Reproductive Health, University of Oxford, Oxford, UK
