
Core Clustering as a Tool for Tackling Noise in Cluster Labels

Journal of Classification

Abstract

Real-world data sets often contain mislabelled entities. This can be particularly problematic if the data set is used by a supervised classification algorithm during its learning phase, as the accuracy of that algorithm on unlabelled data is then likely to suffer considerably. In this paper, we introduce a clustering-based method capable of reducing the number of mislabelled entities in data sets. Our method can be summarised as follows: (i) cluster the data set; (ii) select the entities that have the most potential to be assigned to the correct clusters; (iii) use the entities from the previous step to define the core clusters and map them to the labels using a confusion matrix; (iv) use the core clusters and our cluster membership criterion to correct the labels of the remaining entities. We validate our method empirically through numerous experiments, using k-nearest-neighbour classifiers as a benchmark, on both synthetic and real-world data sets with different proportions of mislabelled entities. Our experiments demonstrate that the proposed method produces promising results; it could therefore be used as a label-correction preprocessing step for a supervised machine learning algorithm.
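To make the four-step pipeline concrete, here is a minimal Python sketch. It is not the authors' implementation: k-means is assumed as the clustering algorithm, the selection of "core" entities is approximated by distance to the cluster centroid, the confusion-matrix mapping is reduced to a per-cluster majority vote over the noisy core labels, and the names `correct_labels` and `core_fraction` are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def correct_labels(X, y, n_clusters, core_fraction=0.5, random_state=0):
    """Clustering-based label correction (sketch of the abstract's steps)."""
    # (i) cluster the data set (k-means assumed; the paper's choice may differ)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X)
    # distance of each entity to its own centroid
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

    y_corrected = np.asarray(y).copy()
    for k in range(n_clusters):
        members = np.flatnonzero(km.labels_ == k)
        if members.size == 0:
            continue
        # (ii) take the core_fraction of members nearest the centroid as the
        #      cluster's core -- a stand-in for the paper's selection criterion
        order = members[np.argsort(dist[members])]
        core = order[: max(1, int(core_fraction * members.size))]
        # (iii) map the core cluster to a label by majority vote over the
        #       (possibly noisy) core labels, i.e. the argmax of one row of
        #       the core-cluster/label confusion matrix
        values, counts = np.unique(y_corrected[core], return_counts=True)
        mapped = values[np.argmax(counts)]
        # (iv) correct the labels of all entities assigned to this cluster;
        #      the paper uses its own cluster membership criterion here
        y_corrected[members] = mapped
    return y_corrected
```

A small usage example on synthetic data with injected label noise, in the spirit of the experiments described in the abstract:

```python
from sklearn.datasets import make_blobs

X, y_true = make_blobs(n_samples=300, centers=3, random_state=1)
rng = np.random.default_rng(1)
y_noisy = y_true.copy()
flip = rng.choice(len(y_noisy), size=60, replace=False)  # corrupt ~20% of labels
y_noisy[flip] = rng.integers(0, 3, size=flip.size)       # (some flips may coincide)
y_clean = correct_labels(X, y_noisy, n_clusters=3)
print("label accuracy before:", (y_noisy == y_true).mean())
print("label accuracy after :", (y_clean == y_true).mean())
```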



Acknowledgments

BM thanks the Laboratory for Decision Choice and Analysis of the National Research University Higher School of Economics, Moscow, Russian Federation, for partially supporting his work within the framework of the HSE University Basic Research Program, funded by the Russian Academic Excellence Project '5-100'.

Author information


Corresponding author

Correspondence to Renato Cordeiro de Amorim.



About this article


Cite this article

de Amorim, R.C., Makarenkov, V. & Mirkin, B. Core Clustering as a Tool for Tackling Noise in Cluster Labels. J Classif 37, 143–157 (2020). https://doi.org/10.1007/s00357-019-9303-4

