Dynamic recursive tree-based partitioning for malignant melanoma identification in skin lesion dermoscopic images

  • Massimo Aria
  • Antonio D’Ambrosio
  • Carmela Iorio
  • Roberta Siciliano
  • Valentina Cozza
Regular Article

Abstract

In this paper, multivalued data, or multiple-valued variables, are defined. They typically arise when there is intrinsic uncertainty in data production, for example from imprecise measuring instruments, as in image recognition, or from human judgments. So far, contributions in the symbolic data analysis literature have provided data preprocessing criteria that allow standard methods such as factorial analysis, clustering, discriminant analysis, and tree-based methods to be applied. As an alternative, this paper introduces a methodology for supervised classification, the Dynamic CLASSification TREE (D-CLASS TREE), which deals simultaneously with both standard and multivalued data. To this end, an innovative partitioning criterion and a tree-growing algorithm are defined. The main result is a dynamic tree structure characterized by the simultaneous presence of binary and ternary partitions. A real-world case study is considered to show the advantages of the proposed methodology and the main issues in interpreting the final results. A comparative study with other approaches dealing with the same types of data is also presented. The comparison highlights that, even though the results are quite similar in terms of error rates, the proposed D-CLASS tree returns a more interpretable tree-based structure.
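To make the idea of mixing binary and ternary partitions concrete, the sketch below is a minimal, purely illustrative Python example; it is not the authors' D-CLASS TREE implementation. It assumes an interval-valued predictor stored as lower and upper bounds and, at each candidate cut point, compares a standard binary split on interval midpoints with a ternary split that routes intervals straddling the cut point to a third node, keeping whichever partition yields the lower weighted Gini impurity.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a vector of class labels."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def weighted_impurity(groups):
    """Size-weighted Gini impurity of a partition into groups of labels."""
    n = sum(len(g) for g in groups)
    return sum(len(g) / n * gini(g) for g in groups if len(g) > 0)

def best_split_interval(lo, hi, y, thresholds):
    """For an interval-valued predictor [lo, hi], compare at each candidate
    threshold a binary split on interval midpoints with a ternary split that
    keeps intervals straddling the threshold in a third node; return the best
    (threshold, split kind, impurity) found."""
    best = (None, None, np.inf)
    mid = (lo + hi) / 2.0
    for t in thresholds:
        # Binary split: midpoint below vs above the threshold.
        bin_imp = weighted_impurity([y[mid <= t], y[mid > t]])
        # Ternary split: entirely below, straddling, entirely above.
        below, above = hi <= t, lo > t
        straddle = ~(below | above)
        ter_imp = weighted_impurity([y[below], y[straddle], y[above]])
        kind, imp = ('binary', bin_imp) if bin_imp <= ter_imp else ('ternary', ter_imp)
        if imp < best[2]:
            best = (t, kind, imp)
    return best

# Toy usage: one interval-valued feature for six lesions, binary class labels.
lo = np.array([0.1, 0.2, 0.4, 0.6, 0.7, 0.9])
hi = np.array([0.3, 0.5, 0.8, 0.9, 1.0, 1.2])
y  = np.array([0, 0, 1, 1, 1, 1])
print(best_split_interval(lo, hi, y, thresholds=np.linspace(0.2, 1.0, 9)))
```

In this toy setting, the third child collects the observations whose interval contains the cut point, i.e. those that cannot be assigned unambiguously to either side, which is what makes the resulting tree "dynamic" in the sense of mixing split arities node by node.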

Keywords

Classification trees · Multivalued data · Melanoma recognition · Predictive learning

Notes

Acknowledgements

The authors would like to thank Prof. A. Baroni of the Campania University “Luigi Vanvitelli” (Italy) for kindly providing the skin lesions data set. The authors also thank two anonymous reviewers whose comments greatly contributed to improving the quality of the manuscript.

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Economics and Statistics, University of Naples Federico II, Naples, Italy
  2. Department of Industrial Engineering, University of Naples Federico II, Naples, Italy
  3. Department of Law, Parthenope University of Naples, Naples, Italy
