
Computationally Efficient Radial Basis Function

  • Adedamola Wuraola
  • Nitish Patel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11302)

Abstract

We introduce a square-law based RBF kernel, the SQuare RBF (SQ-RBF), which is computationally efficient and effective because it eliminates the exponential term. In contrast to the Gaussian RBF, SQ-RBF requires a smaller operation count and can be implemented directly, without calls to a higher-order math library. Its derivative is linear, which speeds up gradient computation and makes it attractive for use in multilayer perceptron neural networks. In experiments, SQ-RBF not only led to faster learning but also required significantly fewer neurons than the Gaussian RBF. On average, we recorded a speed-up in training time of about 8% for SQ-RBF based networks without affecting the overall generalizability of the network. SQ-RBF also uses about 10% fewer neurons than the Gaussian RBF, making it very attractive.
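To make the operation-count argument concrete, here is a minimal NumPy sketch of a piecewise-quadratic square-law RBF and its derivative. The abstract states only the qualitative properties (exponential-free, linear derivative), so the specific piecewise form and the breakpoints at |x| = 1 and |x| = 2 below are illustrative assumptions rather than the paper's exact definition.

    import numpy as np

    def sq_rbf(x):
        # Assumed square-law kernel: 1 - x^2/2 on |x| <= 1,
        # (2 - |x|)^2 / 2 on 1 < |x| <= 2, and 0 elsewhere.
        # Needs only multiplies, adds, and comparisons -- no exp() call.
        a = np.abs(np.asarray(x, dtype=float))
        return np.where(a <= 1.0, 1.0 - 0.5 * a ** 2,
                        np.where(a <= 2.0, 0.5 * (2.0 - a) ** 2, 0.0))

    def sq_rbf_grad(x):
        # Each quadratic piece has a linear derivative: -x on |x| <= 1
        # and sign(x) * (|x| - 2) on 1 < |x| <= 2, zero elsewhere,
        # which keeps gradient computation cheap during training.
        x = np.asarray(x, dtype=float)
        a = np.abs(x)
        return np.where(a <= 1.0, -x,
                        np.where(a <= 2.0, np.sign(x) * (a - 2.0), 0.0))

    x = np.linspace(-3.0, 3.0, 7)
    print(sq_rbf(x))       # peak of 1.0 at x = 0, compact support on |x| <= 2
    print(sq_rbf_grad(x))  # piecewise linear, zero outside the support

Unlike the Gaussian kernel exp(-x**2 / 2), evaluating such a kernel involves no transcendental-function call, which is the source of the reported training-time speed-up.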

Keywords

Activation function · Artificial Neural Networks · RBF · Function approximation


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, The University of Auckland, Auckland, New Zealand
