A Novel Clustering Algorithm Based on a Non-parametric “Anti-Bayesian” Paradigm
The problem of clustering, or unsupervised classification, has been addressed by a myriad of techniques, all of which depend, directly or implicitly, on the Bayesian principle of optimal classification. More specifically, within a Bayesian paradigm, if one compares the testing sample against only a single point in the feature space from each class, the optimal Bayesian strategy is to do so based on the distance to the corresponding means or central points of the respective distributions. When this principle is applied to clustering, an unassigned sample is placed in the cluster whose mean is closest, whether the clustering proceeds bottom-up or top-down. This paper pioneers clustering achieved in an “Anti-Bayesian” manner, building on the breakthrough classification paradigm pioneered by Oommen et al. The latter relies on a radically different approach: classifying data points based on the non-central quantiles of the distributions. Surprisingly and counter-intuitively, this turns out to work as well as, or nearly as well as, an optimal supervised Bayesian scheme, which begs the natural extension to the unexplored arena of clustering. Our algorithm can be seen as the Anti-Bayesian counterpart of the well-known \(k\)-means algorithm (the fundamental Anti-Bayesian paradigm need not be restricted to the \(k\)-means principle; rather, we hypothesize that it can be adapted to any of the scores of techniques that are indirectly based on the Bayesian paradigm), where we assign points to clusters using quantiles rather than the clusters’ centroids. Extensive experimentation (this paper contains the prima facie results of experiments on one- and two-dimensional data; the extensions to multi-dimensional data are omitted in the interest of space, and would use the corresponding multi-dimensional Anti-Naïve-Bayes classification rules given in .) demonstrates that our Anti-Bayesian clustering converges quickly and yields precision competitive with \(k\)-means clustering.
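The quantile-based assignment described above can be illustrated with a short sketch. This is a minimal, hypothetical rendering of an Anti-Bayesian k-means variant, not the paper's exact algorithm: each cluster is summarized per dimension by its \(p\)- and \((1-p)\)-quantiles (the choice \(p = 1/3\) and the update rule are assumptions made here for illustration), and a point joins the cluster whose nearer quantile representative is closest.

```python
import numpy as np

def anti_bayesian_kmeans(X, k, p=1/3, n_iter=50, seed=0):
    """Illustrative sketch of quantile-based ("Anti-Bayesian") clustering.

    Each cluster is represented by two non-central quantile points
    (the p- and (1-p)-quantiles per dimension) instead of a centroid;
    a sample is assigned to the cluster whose nearer quantile
    representative is closest.  The parameter p and this particular
    update rule are assumptions for illustration only.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    # Initialise both quantile representatives from k random samples.
    idx = rng.choice(len(X), size=k, replace=False)
    lo, hi = X[idx].copy(), X[idx].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: distance to the nearer quantile point.
        d_lo = np.linalg.norm(X[:, None, :] - lo[None], axis=2)
        d_hi = np.linalg.norm(X[:, None, :] - hi[None], axis=2)
        labels = np.minimum(d_lo, d_hi).argmin(axis=1)
        # Update step: recompute each cluster's non-central quantiles.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                lo[j] = np.quantile(members, p, axis=0)
                hi[j] = np.quantile(members, 1 - p, axis=0)
    return labels, lo, hi
```

Note that replacing the two quantile representatives with a single mean recovers ordinary \(k\)-means, which is what makes the comparison in the experiments natural.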
Keywords: Cluster Algorithm · Cluster Performance · Cluster Strategy · Monte Carlo Error · Bayesian Paradigm