Journal of Medical Systems, Volume 33, Issue 2, pp 141–153

Comparative Study on the Use of Analytical Software to Identify the Different Stages of Breast Cancer Using Discrete Temperature Data

  • Joanna M. Y. Tan
  • E. Y. K. Ng
  • Rajendra Acharya U.
  • Louis G. Keith
  • Jim Holmes
Original Paper

Abstract

Breast cancer is the second leading cause of cancer death in women. It occurs when cells in the breast begin to grow out of control and invade nearby tissues or spread throughout the body. The limitations of mammography as a screening modality, especially in young women with denser breasts, have necessitated the development of novel and more effective screening strategies with acceptable sensitivity and specificity. The aim of this study was to develop a feasible interpretive software system able to detect and classify breast cancer patients by employing several different analytical software techniques. The protocol described uses 6,000 thermal data points collected from 16 sensors, eight placed on the surface of each breast. Data were collected every 5 min for the duration of the test period. Sensor placement followed a template designed from national tumor registry information to ensure that data were collected in the areas of the breast where most breast cancers develop. Data were collected from 90 individuals exhibiting four different breast conditions, namely: normal, benign, cancer and suspected cancer. The temperature data from these 16 sensors were fed as inputs to the classifiers. Five different classifiers were compared: the back-propagation algorithm, the probabilistic neural network, a Sugeno-type fuzzy classifier, the Gaussian mixture model and the support vector machine. These classifiers attained approximately 80% accuracy in classifying the four diagnoses (normal, benign, cancer and suspected cancer). The Gaussian mixture model was the most sensitive classifier, achieving the highest sensitivity of 94.8%. The support vector machine was considered the best classifier overall, as it produced the most specific and accurate results. Based on these evaluations, the present work shows the feasibility of applying analytical software techniques together with real-time functional thermal analysis to develop a potential tool for the detection and classification of breast cancer.
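To make the classification pipeline concrete, the following is a minimal sketch of the classifier comparison described above, written in Python with scikit-learn. It is not the authors' implementation: the synthetic 90-subject, 16-sensor temperature matrix, the RBF kernel and regularization setting for the support vector machine, and the two-component, diagonal-covariance mixtures used for the Gaussian mixture model classifier are all illustrative assumptions, since the study's dataset and code are not publicly available.

```python
# Illustrative sketch (not the authors' code): classifying 16-sensor breast
# temperature profiles with an SVM and a GMM-based classifier, mirroring the
# comparison described in the abstract. The data below are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
CLASSES = ["normal", "benign", "cancer", "suspected-cancer"]

# Synthetic stand-in: 90 subjects x 16 sensor temperatures (deg C),
# with a small class-dependent offset so the example is learnable.
y = rng.integers(0, 4, size=90)
X = 32.0 + 0.4 * y[:, None] + rng.normal(0.0, 0.5, size=(90, 16))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Support vector machine (RBF kernel, one-vs-one multiclass).
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))

# GMM classifier: fit one mixture per class, predict by maximum likelihood.
gmms = {c: GaussianMixture(n_components=2, covariance_type="diag",
                           random_state=0).fit(X_tr[y_tr == c])
        for c in range(4)}
scores = np.column_stack([gmms[c].score_samples(X_te) for c in range(4)])
print("GMM accuracy:", accuracy_score(y_te, scores.argmax(axis=1)))
```

Per-class sensitivity and specificity, as reported for the five classifiers in the paper, would follow from the confusion matrix of each model's test-set predictions.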

Keywords

Breast cancer · Neural networks · Sensor · Temperature · Fuzzy classifier · Gaussian mixture model · Support vector machine

Copyright information

© Springer Science+Business Media, LLC 2008

Authors and Affiliations

  • Joanna M. Y. Tan (1)
  • E. Y. K. Ng (1, 5)
  • Rajendra Acharya U. (2)
  • Louis G. Keith (3)
  • Jim Holmes (4)
  1. School of Mechanical and Aerospace Engineering, College of Engineering, Nanyang Technological University, Singapore, Singapore
  2. School of Engineering, Division of ECE, Ngee Ann Polytechnic, Singapore, Singapore
  3. Department of Obstetrics and Gynecology, Feinberg School of Medicine, Northwestern University, Chicago, USA
  4. Lifeline Biotechnologies, Inc., Reno, USA
  5. Adjunct NUH Scientist, Office of Biomedical Research, National University Hospital of Singapore, Singapore, Singapore