Automatic Re-orientation of 3D Echocardiographic Images in Virtual Reality Using Deep Learning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12722)


In 3D echocardiography (3D echo), the image orientation varies depending on the position and direction of the transducer during examination. As a result, when reviewing images the user must initially identify anatomical landmarks to understand image orientation – a potentially challenging and time-consuming task. We automated this initial step by training a deep residual neural network (ResNet) to predict the rotation required to re-orient an image to the standard apical four-chamber view. Three data pre-processing strategies were explored: 2D, 2.5D and 3D. Three different loss function strategies were investigated: classification of discrete integer angles, regression with mean absolute angle error loss, and regression with geodesic loss. We then integrated the model into a virtual reality application and aligned the re-oriented 3D echo images with a standard anatomical heart model. The deep learning strategy with the highest accuracy – 2.5D classification of discrete integer angles – achieved a mean absolute angle error on the test set of 9.0°. This work demonstrates the potential of artificial intelligence to support visualisation and interaction in virtual reality.
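The geodesic loss mentioned above penalises the angle of the relative rotation between the predicted and target orientations. A minimal NumPy sketch of this idea follows; the function name and exact formulation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def geodesic_loss(R_pred, R_true):
    """Geodesic distance (in radians) between two 3x3 rotation matrices.

    Illustrative sketch only: computes the rotation angle of the relative
    rotation R_pred^T @ R_true, which is zero when the prediction matches
    the target orientation exactly.
    """
    # Relative rotation: the identity when prediction equals target.
    R_rel = R_pred.T @ R_true
    # Recover the rotation angle from the trace; clip for numerical safety.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

# Example: identity prediction vs. a 90-degree rotation about the z-axis.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
print(geodesic_loss(np.eye(3), Rz90))  # -> pi/2 (~1.5708 rad)
```

Unlike a mean absolute error on Euler angles, this distance is representation-independent and avoids wrap-around ambiguities at the 0°/360° boundary, which is why geodesic losses are a common choice for rotation regression.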


Keywords: 3D echocardiography · Deep learning · Virtual reality



Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
  2. Department of Congenital Heart Disease, Evelina London Children’s Hospital, Guy’s and St Thomas’ National Health Service Foundation Trust, London, UK
