Applied Intelligence, Volume 40, Issue 1, pp 143–153

Accelerating FCM neural network classifier using graphics processing units with CUDA

  • Lin Wang
  • Bo Yang
  • Yuehui Chen
  • Zhenxiang Chen
  • Hongwei Sun

Abstract

With advances in experimental devices and approaches, scientific data can be collected more easily, and some of the resulting data sets are huge. The floating centroids method (FCM) has proven to be a high-performance neural network classifier; however, FCM is difficult to train on a large data set, which restricts its practical application. In this study, a parallel floating centroids method (PFCM) is proposed to speed up FCM on the Compute Unified Device Architecture (CUDA), especially for large data sets. The method performs all stages as a batch within one block: blocks are responsible for evaluating classifiers, and threads for performing their subtasks. Experimental results indicate that both speed and accuracy are improved by this approach.

Keywords

Neural networks classifier · Parallel floating centroids method · Compute unified device architecture · Graphics processing units

Acknowledgements

This work was supported by the National Key Technology Research and Development Program of the Ministry of Science and Technology under Grant 2012BAF12B07-3; the National Natural Science Foundation of China under Grant Nos. 61173078, 61203105, 61173079, 61070130, and 60903176; the Provincial Natural Science Foundation for Outstanding Young Scholars of Shandong under Grant No. JQ200820; the Shandong Provincial Natural Science Foundation, China, under Grant Nos. ZR2010FM047, ZR2012FQ016, and ZR2012FM010; and the Program for New Century Excellent Talents in University under Grant No. NCET-10-0863.

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Lin Wang (1)
  • Bo Yang (1, 2)
  • Yuehui Chen (1)
  • Zhenxiang Chen (1)
  • Hongwei Sun (3)
  1. Shandong Provincial Key Laboratory of Network based Intelligent Computing, University of Jinan, Jinan, China
  2. School of Informatics, Linyi University, Linyi, China
  3. School of Mathematical Sciences, University of Jinan, Jinan, China
