Abstract

Neural networks use their hidden layers to transform input data into linearly separable data clusters, with a linear or perceptron-type output layer making the final projection on the line perpendicular to the discriminating hyperplane. For complex data with multimodal distributions this transformation is difficult to learn. Projection on k ≥ 2 line segments is the simplest extension of linear separability, defining a much easier goal for the learning process. The difficulty of learning non-linear data distributions is shifted to the separation of line intervals, making the main part of the transformation much simpler. For the classification of difficult Boolean problems, such as the parity problem, linear projection combined with k-separability is sufficient.
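As a concrete illustration of this idea (a sketch based on the abstract, not text from the paper itself), consider the n-bit parity problem: projecting a Boolean input x ∈ {0, 1}^n onto the weight vector w = (1, …, 1) gives y = Σ xᵢ, the number of 1 bits. The class label then alternates along this line, so parity is k-separable with k = n + 1 pure intervals even though it is not linearly separable for n ≥ 2. The minimal Python sketch below verifies this; the function name is illustrative, not from the paper:

```python
import itertools

def k_separate_parity(n):
    """Show that n-bit parity becomes k-separable (k = n + 1 intervals)
    after the simplest linear projection y = w . x with w = (1, ..., 1)."""
    w = [1] * n  # linear projection weights: y counts the 1 bits
    intervals = {}  # projected value y -> set of class labels landing on y
    for x in itertools.product([0, 1], repeat=n):
        y = sum(wi * xi for wi, xi in zip(w, x))  # projection onto w
        label = sum(x) % 2                        # parity class of x
        intervals.setdefault(y, set()).add(label)
    # Each projected value y = 0..n holds exactly one class, and the class
    # alternates with y, so n + 1 line intervals separate the data.
    return intervals

if __name__ == "__main__":
    for y, labels in sorted(k_separate_parity(4).items()):
        print(f"y = {y}: class {labels}")
```

Running this for n = 4 prints five projected values, each containing a single class (0, 1, 0, 1, 0), confirming that a single linear projection followed by interval separation suffices where linear separability fails.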

Keywords

Boolean Function · Parity Problem · Linear Projection · Separable Function · Linear Separability



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Włodzisław Duch 1, 2
  1. Department of Informatics, Nicolaus Copernicus University, Toruń, Poland
  2. School of Computer Engineering, Nanyang Technological University, Singapore
