Efficient K-Means Clustering Using Accelerated Graphics Processors

  • S. A. Arul Shalom
  • Manoranjan Dash
  • Minh Tue
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5182)

Abstract

We exploit the parallel architecture of the Graphics Processing Unit (GPU) found in desktop computers to implement the traditional K-means algorithm efficiently. Our clustering approach avoids transferring data and cluster information between the GPU and the CPU between iterations. In this paper we present the novelties of our approach and the techniques employed to represent data, compute distances and centroids, and identify cluster elements on the GPU. We measure performance using computational time per iteration as the metric. For various data sizes, our implementation of K-means clustering is 4 to 12 times faster than the CPU on an NVIDIA 5900 graphics processor and 7 to 22 times faster on an NVIDIA 8500 graphics processor. For evaluations with various cluster sizes, we achieved speed gains in computational time per iteration of 12 to 64 times on the 5900 and 20 to 140 times on the 8500 graphics processor.
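The abstract describes the two per-iteration steps that dominate K-means cost: computing point-to-centroid distances to assign cluster elements, and recomputing centroids. As a point of reference for what is parallelized on the GPU, the following is a minimal CPU sketch of one such iteration in Python; it is illustrative only, not the authors' shader-based implementation, and the function name and data layout are assumptions.

```python
def kmeans_iteration(points, centroids):
    """One K-means (Lloyd) iteration: assign each point to its nearest
    centroid by squared Euclidean distance, then recompute each centroid
    as the mean of its assigned points."""
    k = len(centroids)
    assignments = []
    for p in points:
        # Distance from this point to every centroid (the step the paper
        # maps onto parallel GPU fragment processing).
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        assignments.append(dists.index(min(dists)))
    new_centroids = []
    for j in range(k):
        members = [p for p, a in zip(points, assignments) if a == j]
        if members:
            dim = len(members[0])
            new_centroids.append(tuple(
                sum(m[d] for m in members) / len(members) for d in range(dim)
            ))
        else:
            # Keep the old centroid if a cluster loses all its points.
            new_centroids.append(tuple(centroids[j]))
    return assignments, new_centroids
```

Keeping both steps on the GPU, as the paper proposes, removes the need to ship `points` and `assignments` back to the CPU between calls like this one.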

Keywords

K-means clustering; GPGPU; Computational efficiency


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • S. A. Arul Shalom (1)
  • Manoranjan Dash (1)
  • Minh Tue (2)
  1. School of Computer Engineering, Nanyang Technological University, Singapore
  2. NUS High School of Mathematics and Science, Singapore
