Preliminary Results: Comparison of Convolutional Neural Network Architectures as an Auxiliary Clinical Tool Applied to Screening Mammography in Mexican Women

  • Original Article
  • Published: Journal of Medical and Biological Engineering

Abstract

Purpose

Mammography is the modality of choice for the early detection of breast cancer. Deep learning, specifically using convolutional neural networks (CNNs), has achieved extraordinary results in the classification of diseases on imaging, including breast cancer. The images used to train a CNN vary based on several factors, such as imaging technique, imaging equipment, and study population; these factors significantly affect the accuracy of the CNN models. The aim of this study was to develop a novel CNN for the classification of mammograms as benign or malignant and to compare its utility to that of popular pre-trained CNNs in the literature using transfer learning. All CNNs were trained to detect breast cancer using mammograms from a newly created database of Mexican women (MAMMOMX-PABIOM) and from a public database of UK women (MIAS).
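The abstract does not describe the layers of the novel CNN, so the following is only a minimal, illustrative Keras sketch of a compact binary-classification CNN for benign-vs-malignant mammograms; the input shape, filter counts, and dropout rate are assumptions, not the authors' design.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_cnn(input_shape=(224, 224, 1)):
    # Illustrative small CNN for benign (0) vs. malignant (1) classification;
    # all architectural choices here are assumed, not taken from the paper.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # probability of malignancy
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model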

Methods

A database (MAMMOMX-PABIOM) was built comprising 1,070 mammography images of 235 Mexican patients from 4 hospitals in Mexico. The study also used mammographic images from the Mammographic Image Analysis Society (MIAS) public database, which comprises mammography images from the UK National Breast Screening Programme. A novel CNN was developed and trained on different configurations of training data; the accuracy of the models resulting from the novel CNN was compared with that of models resulting from more advanced pre-trained CNNs (DenseNet121, MobileNetV2, ResNet50, VGG16), which were built using transfer learning.
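The abstract names the pre-trained backbones but not the exact transfer-learning configuration; the sketch below, assuming a TensorFlow/Keras workflow, shows how an ImageNet-pretrained MobileNetV2 can be frozen and topped with a new benign/malignant classification head. The input size, dropout rate, and learning rate are illustrative assumptions, and grayscale mammograms are assumed to be replicated to three channels.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_transfer_model(input_shape=(224, 224, 3)):
    # Transfer-learning sketch: frozen ImageNet-pretrained MobileNetV2 base
    # plus a new binary head; hyperparameters are assumptions, not the paper's.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False  # keep the pre-trained features fixed

    inputs = layers.Input(shape=input_shape)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # benign vs. malignant

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model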

Results

Of the models resulting from pre-trained CNNs using transfer learning, the model based on MobileNetV2 and training data from the MAMMOMX-PABIOM database achieved the highest validation accuracy of 70.10%. In comparison, the novel CNN, when trained with the data configuration A6, which comprises data from both the MAMMOMX-PABIOM database and the MIAS database, produced a much higher accuracy of 99.14%.

Conclusion

Although transfer learning is a widely used technique when training data are scarce, the novel CNN produced much higher accuracy values across all configurations of training data than the pre-trained CNNs using transfer learning. In addition, this study addresses a gap: there is currently neither a national database of mammograms of Mexican women nor a deep learning tool, focused on this population, for classifying mammograms as benign or malignant.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgements

The authors thank Joanne Chin for her invaluable assistance in the language editing of this article.

Funding

REO-A is partially supported by the National Institute of Health/National Cancer Institute Cancer Center Support Grant P30 CA008748.

Author information

Contributions

Conceptualization: REO-A; Methodology: REO-A, SAG-CH and SA-J; Software: SA-J; Validation: JC-C; Formal analysis: SA-J and JC-C; Investigation: SA-J, SAG-CH, JC-C, CP-T, MB-L, LEG-L, CH-O, JHB-D and REO-A; Resources: MB-L, LEG-L, CH-O and JHB-D; Data Curation: SA-J; Writing - Original Draft: REO-A, SAG-CH and SA-J; Writing - Review & Editing: JC-C, CP-T, MB-L, LEG-L, CH-O and JHB-D; Visualization: SA-J; Supervision: REO-A and SAG-CH; Project administration: REO-A and SAG-CH; Funding acquisition: CP-T.

Corresponding authors

Correspondence to Susana Aideé González-Chávez or Rosa Elena Ochoa-Albíztegui.

Ethics declarations

Ethics approval and consent to participate

The Institutional Review Board approved this study and waived the requirement for patient consent.

Conflict of interest

The authors declare that they have no conflict of interest regarding the publication of this article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Acosta-Jiménez, S., González-Chávez, S.A., Camarillo-Cisneros, J. et al. Preliminary Results: Comparison of Convolutional Neural Network Architectures as an Auxiliary Clinical Tool Applied to Screening Mammography in Mexican Women. J. Med. Biol. Eng. (2024). https://doi.org/10.1007/s40846-024-00868-6

  • DOI: https://doi.org/10.1007/s40846-024-00868-6
