
Clustering as an Example of Optimizing Arbitrarily Chosen Objective Functions

  • Marcin Budka
Part of the Studies in Computational Intelligence book series (SCI, volume 457)

Abstract

This paper is a reflection on the common practice of solving various types of learning problems by optimizing arbitrarily chosen criteria, in the hope that they are well correlated with the criterion actually used to assess the results. The issue is investigated using clustering as an example, so a unified view of clustering as an optimization problem is first proposed. It stems from the observation that typical design choices in clustering, such as the number of clusters or the similarity measure, can be, and often are, suboptimal, also from the point of view of the clustering quality measures later used for algorithm comparison and ranking. To illustrate this point, a generalized clustering framework is proposed and a proof of concept is provided using standard benchmark datasets and two popular clustering methods for comparison.
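
The generalized framework itself is not reproduced on this page, but the underlying idea, treating the clustering quality criterion directly as the objective of a general-purpose optimizer while leaving design choices such as the number of clusters free, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration rather than the paper's implementation: it uses the Iris benchmark, the Davies-Bouldin index as the quality criterion, and a simple mutation-based hill climber standing in for the genetic algorithm or particle swarm optimizer named in the keywords; parameters such as k_max and iters are purely illustrative.

```python
# Minimal sketch (not the paper's framework): clustering treated as direct
# optimization of an externally chosen quality criterion. A naive
# mutation-based hill climber minimizes the Davies-Bouldin index over label
# vectors, with the number of clusters left free up to k_max rather than
# fixed in advance.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X = load_iris().data                      # stand-in for a benchmark dataset
n, k_max, iters = len(X), 6, 2000         # illustrative settings

def score(labels):
    # Davies-Bouldin is undefined for a single cluster; penalize that case.
    if len(np.unique(labels)) < 2:
        return np.inf
    return davies_bouldin_score(X, labels)

labels = rng.integers(0, k_max, size=n)   # random initial partition
best = score(labels)

for _ in range(iters):
    candidate = labels.copy()
    idx = rng.integers(0, n, size=max(1, n // 20))        # mutate ~5% of points
    candidate[idx] = rng.integers(0, k_max, size=len(idx))
    s = score(candidate)
    if s < best:                          # greedy acceptance (hill climbing)
        labels, best = candidate, s

print("clusters used:", len(np.unique(labels)), "Davies-Bouldin:", round(best, 3))
```

Swapping the hill climber for a population-based optimizer, or the Davies-Bouldin index for another validity measure, requires changing only the search loop or the score function, which is the point the abstract makes about the arbitrariness of the optimized criterion.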

Keywords

clustering · cluster analysis · optimization · genetic algorithms · particle swarm optimization · general-purpose optimization techniques

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Bournemouth University, Poole, UK
