Targeting Precision with Data Augmented Samples in Deep Learning

  • Pietro Nardelli
  • Raúl San José Estépar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)


In the last five years, deep learning (DL) has become the state-of-the-art tool for solving various tasks in medical image analysis. Among the different methods that have been proposed to improve the performance of Convolutional Neural Networks (CNNs), one typical approach is the augmentation of the training data set through various transformations of the input image. Data augmentation is typically used when only a small amount of data is available, as in most medical imaging problems, to present a more substantial amount of data to the network and improve overall accuracy. However, the network's ability to improve its results when a slightly modified version of the same input is presented is often overestimated. This overestimation results from the strong correlation between data samples when they are treated as independent during training. In this paper, we emphasize the importance of optimizing for precision, in addition to accuracy, across multiple replicates of the same training sample in the context of data augmentation. To this end, we propose a new approach that leverages the augmented data to make the network focus on precision through a specifically designed loss function, with the ultimate goal of improving both the overall performance and the network's precision at the same time. We present two different applications of DL (regression and segmentation) to demonstrate the strength of the proposed strategy. We believe this work will pave the way for an explicit use of data augmentation within the loss function, helping the network become invariant to small variations of the same input sample, a characteristic required in every application in the medical imaging field.
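The core idea sketched above (a loss that rewards agreement among predictions for augmented replicates of the same sample, in addition to agreement with the label) can be illustrated with a minimal example. This is a hypothetical sketch, not the paper's actual loss: the function name `precision_aware_loss`, the mean-squared accuracy term, and the variance-based precision term with weight `lam` are all illustrative assumptions.

```python
import numpy as np

def precision_aware_loss(preds, target, lam=0.5):
    """Hypothetical sketch of a precision-aware loss.

    preds  : predictions for n augmented replicates of ONE training sample.
    target : the sample's ground-truth value.
    lam    : weight of the precision (replicate-variance) term -- illustrative.
    """
    preds = np.asarray(preds, dtype=float)
    # Accuracy term: how far the replicate predictions are from the label.
    accuracy_term = np.mean((preds - target) ** 2)
    # Precision term: spread of the predictions across replicates;
    # zero when the network gives identical outputs for all augmentations.
    precision_term = np.var(preds)
    return accuracy_term + lam * precision_term
```

A network that predicts [0.9, 1.1] for two replicates of a sample labeled 1.0 is accurate on average but imprecise; the variance term penalizes that spread, pushing the network toward augmentation-invariant outputs.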


Keywords: Deep learning · Data augmentation · Accuracy · Precision



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Applied Chest Imaging Laboratory, Brigham and Women’s Hospital, Harvard Medical School, Boston, USA
