Improving Neural Network Classifier Using Gradient-Based Floating Centroid Method

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1143)


The floating centroid method (FCM) offers an efficient way to solve the fixed-centroid problem for neural network classifiers. However, its reliance on evolutionary computation as the optimization method prevents FCM from achieving satisfactory performance across different neural network structures, owing to high computational complexity and inefficiency. Traditional gradient-based methods, by contrast, have been widely adopted to optimize neural network classifiers. In this study, a gradient-based floating centroid (GDFC) method is introduced to address the fixed-centroid problem for neural network classifiers optimized by gradient-based methods. Furthermore, a new loss function for optimizing GDFC is introduced. The experimental results show that GDFC achieves better classification performance than the comparison methods on the benchmark datasets.
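The paper's exact GDFC loss is not reproduced on this page. As an illustration of the core idea only, the sketch below trains per-class centroids jointly with a network mapping by gradient descent, rather than by evolutionary search. The linear map, the toy data, and the softmax-over-negative-distances loss are all assumptions for this sketch, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (a hypothetical stand-in for the paper's benchmark datasets).
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
T = np.eye(2)[y]                          # one-hot targets

W = rng.normal(0.0, 0.1, (2, 2))          # linear "network" into a 2-D partition space
C = rng.normal(0.0, 0.1, (2, 2))          # floating centroids, one per class

lr = 0.05
for _ in range(300):
    Z = X @ W                                      # map samples into the partition space
    Zd = Z[:, None, :] - C[None]                   # offsets from each sample to every centroid
    logits = -(Zd ** 2).sum(2)                     # nearer centroid -> larger logit
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)                   # softmax over negative squared distances
    G = (P - T) / len(X)                           # d(mean cross-entropy) / d(logits)
    dZ = -2 * np.einsum('nk,nkd->nd', G, Zd)       # gradient w.r.t. mapped points
    dC = 2 * np.einsum('nk,nkd->kd', G, Zd)        # gradient w.r.t. centroids
    W -= lr * (X.T @ dZ)                           # network weights and centroids are
    C -= lr * dC                                   # updated jointly by gradient descent

# Classify by the nearest centroid in the learned partition space.
pred = (((X @ W)[:, None, :] - C[None]) ** 2).sum(2).argmin(1)
acc = (pred == y).mean()
```

Because the centroids receive gradients of the same loss as the network, they "float" to positions that fit the learned mapping, which is the behavior FCM obtains via evolutionary computation.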


Neural network classifier · Classification · Loss function · Floating centroid method



This work was supported by the National Natural Science Foundation of China under Grants No. 61872419, No. 61573166, No. 61572230, No. 61873324, No. 81671785, and No. 61672262; the Shandong Provincial Natural Science Foundation under Grants No. ZR2019MF040 and No. ZR2018LF005; the Shandong Provincial Key R&D Program under Grants No. 2019GGX101041, No. 2018GGX101048, No. 2016ZDJS01A12, No. 2016GGX101001, and No. 2017CXZC1206; and the Taishan Scholar Project of Shandong Province, China, under Grant No. tsqn201812077.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan, China
