
Core Clustering as a Tool for Tackling Noise in Cluster Labels

  • Renato Cordeiro de Amorim
  • Vladimir Makarenkov
  • Boris Mirkin

Abstract

Real-world data sets often contain mislabelled entities. This can be particularly problematic when such a data set is used to train a supervised classification algorithm, since the accuracy of that algorithm on unlabelled data is then likely to suffer considerably. In this paper, we introduce a clustering-based method capable of reducing the number of mislabelled entities in data sets. Our method can be summarised as follows: (i) cluster the data set; (ii) select the entities most likely to have been assigned to the correct clusters; (iii) use these entities to define the core clusters and map them to the labels using a confusion matrix; (iv) use the core clusters and our cluster membership criterion to correct the labels of the remaining entities. We validate our method empirically in numerous experiments, using k-nearest-neighbour classifiers as a benchmark, on both synthetic and real-world data sets with different proportions of mislabelled entities. Our experiments demonstrate that the proposed method produces promising results; thus, it could be used as a data-correction preprocessing step for a supervised machine learning algorithm.
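The four steps above can be illustrated with a minimal Python sketch. It assumes scikit-learn's KMeans for the clustering step and SciPy's Hungarian solver (linear_sum_assignment) to derive the cluster-to-label mapping from the confusion matrix; the nearest-to-centroid selection rule, the core_fraction parameter, and the helper name correct_labels are illustrative assumptions and stand-ins for the paper's actual cluster membership criterion, not code from the paper.

```python
# Minimal sketch of the four-step label-correction pipeline.
# Assumptions: n_clusters equals the number of classes; the "core" of each
# cluster is taken to be the core_fraction of entities nearest its centroid
# (a stand-in for the paper's membership criterion).
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def correct_labels(X, y, n_clusters, core_fraction=0.5):
    y = np.asarray(y)

    # (i) cluster the data set
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

    # (ii) select the entities most likely to be correctly clustered:
    # the core_fraction of each cluster closest to its centroid
    core = np.zeros(len(X), dtype=bool)
    for c in range(n_clusters):
        idx = np.flatnonzero(km.labels_ == c)
        keep = idx[np.argsort(dist[idx])[: max(1, int(core_fraction * len(idx)))]]
        core[keep] = True

    # (iii) build the cluster-vs-label confusion matrix over the core
    # entities and map clusters to labels by maximising agreement
    classes = np.unique(y)
    conf = np.zeros((n_clusters, len(classes)), dtype=int)
    for c, lab in zip(km.labels_[core], y[core]):
        conf[c, np.searchsorted(classes, lab)] += 1
    rows, cols = linear_sum_assignment(-conf)  # negate to maximise
    cluster_to_label = dict(zip(rows, classes[cols]))

    # (iv) correct every entity's label according to its cluster's label
    return np.array([cluster_to_label[c] for c in km.labels_])
```

In this sketch the corrected labels returned by correct_labels would then serve as the cleaned training set for a supervised learner such as the k-nearest-neighbour classifiers used as a benchmark in the experiments.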

Keywords

Label noise · Clustering · k-means · Core clustering · Minkowski distance

Notes

Acknowledgments

BM thanks the Laboratory for Decision Choice and Analysis of the National Research University Higher School of Economics, Moscow, Russian Federation, for partially supporting his work within the framework of the HSE University Basic Research Program funded by the Russian Academic Excellence Project ‘5-100’.


Copyright information

© The Classification Society 2019

Authors and Affiliations

  • Renato Cordeiro de Amorim (1)
  • Vladimir Makarenkov (2)
  • Boris Mirkin (3, 4)
  1. School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
  2. Département d’informatique, Université du Québec à Montréal, Montreal, Canada
  3. Department of Computer Science and Information Systems, Birkbeck, University of London, London, UK
  4. Department of Data Analysis and Machine Intelligence, National Research University Higher School of Economics, Moscow, Russian Federation
