The Concept and Properties of Sigma-if Neural Network

  • M. Huk
  • H. Kwasnicka

Abstract

Our recent work on artificial neural networks points to the possibility of extending the activation function of the standard artificial neuron model with a conditional signal accumulation technique, significantly enhancing the capabilities of neural networks. We present a new artificial neuron model, called Sigma-if, whose novel activation function gives it the ability to dynamically tune the size of the decision space under consideration. The paper discusses the construction of the proposed neuron as well as the training of Sigma-if feedforward neural networks on well-known sample classification problems.
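
As a rough illustration of the conditional signal accumulation idea mentioned above, a minimal sketch follows. It assumes that a neuron's inputs are split into importance-ordered groups and that aggregation stops once the partial weighted sum crosses an aggregation threshold; the grouping, the threshold value theta and the sigmoid output function are illustrative assumptions, not the exact formulation from the paper.

    # Minimal sketch of conditional signal accumulation (Sigma-if-style neuron).
    # Assumptions: inputs are pre-assigned to importance-ordered groups, and
    # aggregation stops when the partial activation magnitude reaches theta.
    import numpy as np

    def sigma_if_neuron(x, w, groups, theta=0.6):
        """Return the neuron output and the number of input groups actually read.

        x      -- input vector
        w      -- weight vector of the same length as x
        groups -- list of index arrays, ordered from most to least important
        theta  -- aggregation threshold stopping further signal accumulation
        """
        u = 0.0            # accumulated (partial) weighted activation
        groups_read = 0
        for idx in groups:
            u += float(np.dot(w[idx], x[idx]))   # add this group's contribution
            groups_read += 1
            if abs(u) >= theta:                  # enough evidence: stop reading inputs
                break
        y = 1.0 / (1.0 + np.exp(-u))             # illustrative sigmoid output function
        return y, groups_read

    # Example: three inputs split into two groups; here only the first group is read,
    # so the effective decision space is smaller than the full input space.
    x = np.array([0.9, 0.1, 0.4])
    w = np.array([1.2, 0.3, -0.5])
    groups = [np.array([0]), np.array([1, 2])]
    print(sigma_if_neuron(x, w, groups))

When the most important group already provides a decisive partial activation, the remaining inputs are never aggregated, which is the sense in which the neuron dynamically tunes the size of the decision space it considers.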

Keywords

Hidden Layer · Neural Information Processing System · Decision Space · Classic Neural Network · Classic Artificial Neural Network

Copyright information

© Springer-Verlag/Wien 2005

Authors and Affiliations

  • M. Huk (1)
  • H. Kwasnicka (1)

  1. Department of Computer Science, Wroclaw University of Technology, Poland
