A Review on Enhancement to Standard K-Means Clustering

  • Mohit Kushwaha
  • Himanshu Yadav
  • Chetan Agrawal
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 100)


In clustering, objects of similar nature lie together in the same group, called a cluster, while objects of distinct nature belong to other clusters of their own kind. The standard k-means algorithm is a prime and basic clustering procedure, but it suffers from several shortcomings: (1) its performance depends on the initial cluster centres, which in standard k-means are selected randomly; (2) its computational time is O(NKL), where N is the number of data points, K the number of distinct clusters and L the number of iterations, which is time-consuming and too expensive for large datasets; (3) it suffers from the dead-unit problem, which results in clusters that contain no data points, i.e. empty clusters; (4) random initialization can cause the algorithm to converge to a local minimum. Several enhancement techniques have been introduced to improve the efficiency of the basic k-means algorithm, but most of them address only one of the above drawbacks at a time. In this review paper, we consider the initial-centre and computational-complexity problems together with the dead-unit problem in a single algorithm.
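The drawbacks listed above can be made concrete with a minimal sketch of the standard k-means procedure (illustrative Python, not code from the paper): each of the L iterations compares all N points against all K centres, giving the O(NKL) cost; the centres are drawn at random, so the result depends on the seed and may settle in a local minimum; and a centre can end up with no assigned points, the dead-unit (empty-cluster) case.

```python
import random

def kmeans(points, k, max_iters=100, seed=0):
    """Minimal standard k-means illustrating the drawbacks above.

    Each iteration assigns all N points to the nearest of K centres and
    recomputes the centres, so L iterations cost O(N*K*L). Centres are
    initialised at random, so the outcome depends on the seed and may
    converge to a local minimum; a centre can also be left with no
    points (the dead-unit / empty-cluster problem).
    """
    rng = random.Random(seed)
    centres = rng.sample(points, k)  # random initialisation (drawback 1 and 4)
    for _ in range(max_iters):
        # Assignment step: nearest centre by squared Euclidean distance.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centres[i])))
            clusters[j].append(p)
        # Update step: mean of each non-empty cluster.
        new_centres = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centres.append(tuple(sum(c) / len(cl) for c in zip(*cl)))
            else:
                # Dead unit: an empty cluster keeps its stale centre (drawback 3).
                new_centres.append(centres[i])
        if new_centres == centres:  # converged, possibly only locally
            break
        centres = new_centres
    return centres, clusters
```

On two well-separated pairs of points, e.g. `[(0, 0), (0, 1), (10, 10), (10, 11)]` with `k=2`, the sketch converges to the two group means; with less fortunate seeds or data, the same code exhibits the local-minimum and empty-cluster behaviour the enhanced algorithms surveyed here try to avoid.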


Cluster analysis · Similarity proximity · Enhanced k-means · Standard k-means



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Computer Science and Engineering, RITS Bhopal, Bhopal, India