
Fully Automated Convolutional Neural Network Method for Quantification of Breast MRI Fibroglandular Tissue and Background Parenchymal Enhancement

Abstract

The aim of this study is to develop a fully automated convolutional neural network (CNN) method for quantification of breast MRI fibroglandular tissue (FGT) and background parenchymal enhancement (BPE). An institutional review board-approved retrospective study evaluated 1114 breast volumes in 137 patients using T1 precontrast, T1 postcontrast, and T1 subtraction images. First, using our previously published method of quantification, we manually segmented and calculated the amount of FGT and BPE to establish ground truth parameters. Then, a novel 3D CNN modified from the standard 2D U-Net architecture was developed and implemented for voxel-wise prediction of whole breast and FGT margins. In the collapsing arm of the network, a series of 3D convolutional filters of size 3 × 3 × 3 are applied for standard CNN hierarchical feature extraction. To reduce feature map dimensionality, a 3 × 3 × 3 convolutional filter with stride 2 in all directions is applied; a total of 4 such operations are used. In the expanding arm of the network, a series of convolutional transpose filters of size 3 × 3 × 3 are used to up-sample each intermediate layer. To synthesize features at multiple resolutions, connections are introduced between the collapsing and expanding arms of the network. L2 regularization was implemented to prevent over-fitting. Cases were separated into training (80%) and test (20%) sets. Fivefold cross-validation was performed. Software code was written in Python using the TensorFlow module on a Linux workstation with an NVIDIA GTX Titan X GPU. In the test set, the fully automated CNN method for quantifying the amount of FGT yielded an accuracy of 0.813 (cross-validation Dice similarity coefficient) and a Pearson correlation of 0.975. For quantifying the amount of BPE, the CNN method yielded an accuracy of 0.829 and a Pearson correlation of 0.955. Our CNN network was able to quantify FGT and BPE within an average of 0.42 s per MRI case.
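As a rough illustration of the collapsing and expanding arms described above, the following sketch traces how the spatial dimensions evolve through the network. This is not the authors' released code; the input size of 128 voxels and the helper names are assumed purely for illustration.

```python
import math

def collapse(size, n_down=4):
    """Spatial sizes along the collapsing arm: each 3x3x3 convolution
    with stride 2 in all directions (and 'same' padding) halves the
    spatial dimension; 4 such operations are used in the paper."""
    sizes = [size]
    for _ in range(n_down):
        size = math.ceil(size / 2)
        sizes.append(size)
    return sizes

def expand(sizes):
    """Spatial sizes along the expanding arm: each 3x3x3 transpose
    convolution with stride 2 doubles the spatial dimension. At each
    resolution the up-sampled map is concatenated with the matching
    encoder map (the connections between the two arms)."""
    return [s * 2 for s in reversed(sizes[1:])]

enc = collapse(128)   # e.g. [128, 64, 32, 16, 8]
dec = expand(enc)     # e.g. [16, 32, 64, 128] back to input resolution
```

Because the skip connections require the decoder sizes to match the encoder sizes at each level, input dimensions divisible by 2^4 = 16 avoid cropping or padding at the concatenation steps.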
A fully automated CNN method can be utilized to quantify MRI FGT and BPE. A larger dataset will likely improve our model.
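The test-set metrics reported above (Dice overlap between CNN and manual segmentations, and Pearson correlation between CNN-derived and manually derived FGT/BPE amounts) follow standard definitions. A minimal NumPy sketch, with function names of our own choosing rather than the authors' code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def pearson(x, y):
    """Pearson correlation between two sequences of per-case values
    (e.g. CNN-derived vs. manually derived FGT percentages)."""
    return float(np.corrcoef(x, y)[0, 1])
```

In practice the Dice score is computed per segmented volume and averaged across the test set, while the Pearson correlation is computed once over the paired per-case quantities.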




Author information


Corresponding author

Correspondence to Richard Ha.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Ha, R., Chang, P., Mema, E. et al. Fully Automated Convolutional Neural Network Method for Quantification of Breast MRI Fibroglandular Tissue and Background Parenchymal Enhancement. J Digit Imaging 32, 141–147 (2019). https://doi.org/10.1007/s10278-018-0114-7



Keywords

  • Convolutional neural network
  • Breast cancer
  • MRI