Comparing Support Vector Machines and Feed-forward Neural Networks with Similar Parameters

  • Enrique Romero
  • Daniel Toppo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4224)

Abstract

From a computational point of view, the main differences between SVMs and FNNs are (1) how the number of elements of their respective solutions (support vectors for SVMs, hidden units for FNNs) is selected and (2) how the weights (both hidden-layer and output-layer) are found. Sequential FNNs, however, do not show all of these differences with respect to SVMs, since the number of hidden units is obtained as a consequence of the learning process (as for SVMs) rather than fixed a priori. In addition, there exist sequential FNNs whose hidden-layer weights are always a subset of the data, as is usual for SVMs. An experimental study on several benchmark data sets is presented, comparing several aspects of SVMs and the aforementioned sequential FNNs. The experiments were performed under conditions as similar as possible for both models. Accuracies were found to be very similar. Regarding model size, the sequential FNNs constructed models with fewer hidden units than the SVMs had support vectors. In addition, all the hidden-layer weights in the FNN models were also selected as support vectors by the SVMs. The computational times were lower for SVMs, with no numerical problems observed.
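To make the compared quantities concrete, here is a minimal, self-contained sketch (not the authors' implementation) of the kind of comparison described above: a sequential RBF network is grown greedily, matching-pursuit style, with each hidden unit centred on a training point, and its number of hidden units is then compared with the number of support vectors of an RBF SVM trained on the same data. The `sequential_rbf_fnn` function, the greedy selection criterion, and all parameter values are illustrative assumptions; scikit-learn's `SVC` stands in for a generic SVM trainer.

```python
import numpy as np
from sklearn.svm import SVC

def rbf(X, c, gamma):
    """Gaussian activation of every row of X for a unit centred at c."""
    return np.exp(-gamma * np.sum((X - c) ** 2, axis=1))

def sequential_rbf_fnn(X, y, gamma=1.0, max_units=30, tol=1e-3):
    """Greedy sequential construction (a matching-pursuit-style sketch,
    not the paper's exact algorithm): each new hidden unit is centred on
    the training point whose activation best explains the residual."""
    n = len(X)
    H = np.ones((n, 1))              # design matrix; first column is a bias
    centers = []
    residual = y.astype(float)
    for _ in range(max_units):
        # activation of every training point for every candidate centre
        acts = np.array([rbf(X, X[i], gamma) for i in range(n)])
        # matching-pursuit score: normalised squared correlation with residual
        scores = (acts @ residual) ** 2 / np.sum(acts * acts, axis=1)
        best = int(np.argmax(scores))
        centers.append(best)
        H = np.column_stack([H, rbf(X, X[best], gamma)])
        # refit *all* output-layer weights by least squares at every step
        w, *_ = np.linalg.lstsq(H, residual * 0 + y, rcond=None)
        residual = y - H @ w
        if np.mean(residual ** 2) < tol:
            break
    return centers, w

# Toy two-class problem in the spirit of the paper's comparison
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # XOR-like labels

centers, _ = sequential_rbf_fnn(X, y, gamma=1.0)
svm = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)

sv = set(svm.support_.tolist())                  # indices of support vectors
print("sequential FNN hidden units:", len(set(centers)))
print("SVM support vectors:       ", len(sv))
print("hidden units that are SVs: ", len(set(centers) & sv))
```

Because every hidden unit is centred on a training point, the final `print` checks directly whether the FNN centres form a subset of the SVM support vectors, which is the inclusion the abstract reports. Refitting all output weights by least squares after each insertion is one common design choice in sequential construction; the paper's actual algorithm may differ.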

Keywords

Support Vector · Activation Function · Radial Basis Function Network · Hidden Unit · Relevance Vector Machine

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Enrique Romero ¹
  • Daniel Toppo ²
  1. Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya, Barcelona, Spain
  2. Swiss Federal Institute of Technology, I&C School of Computer and Communication Sciences, Onex, Switzerland