
Perceptrons

Abstract

The perceptron [38], also referred to as a McCulloch-Pitts neuron or linear threshold gate, is the earliest and simplest neural network model. Rosenblatt used a single-layer perceptron for the classification of linearly separable patterns.
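
The abstract does not reproduce the learning rule itself, so the following is a minimal sketch (not taken from the chapter) of the standard perceptron update used for linearly separable data: weights are adjusted only when a pattern is misclassified. The function name train_perceptron, the learning rate, and the bipolar AND toy problem are illustrative assumptions.

    import numpy as np

    def train_perceptron(X, y, lr=1.0, max_epochs=100):
        """Perceptron learning rule for labels y in {-1, +1}.

        Weights change only on misclassified patterns; for linearly
        separable data the loop stops after a finite number of epochs
        (perceptron convergence theorem).
        """
        X = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb bias as an extra input
        w = np.zeros(X.shape[1])
        for _ in range(max_epochs):
            errors = 0
            for xi, yi in zip(X, y):
                if yi * np.dot(w, xi) <= 0:   # misclassified (or on the boundary)
                    w += lr * yi * xi         # perceptron update
                    errors += 1
            if errors == 0:                   # all training patterns correct
                break
        return w

    # Toy linearly separable problem: logical AND with bipolar targets
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([-1, -1, -1, 1])
    w = train_perceptron(X, y)
    print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # -> [-1. -1. -1.  1.]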

References

  1. Amin, M. F., & Murase, K. (2009). Single-layered complex-valued neural network for real-valued classification problems. Neurocomputing, 72, 945–955.
  2. Amit, D. J., Wong, K. Y. M., & Campbell, C. (1989). Perceptron learning with sign-constrained weights. Journal of Physics A: Mathematical and General, 22, 2039–2045.
  3. Auer, P., Hebster, M., & Warmuth, M. K. (1996). Exponentially many local minima for single neurons. In D. S. Touretzky, M. C. Mozer, & M. E. Hasselmo (Eds.), Advances in neural information processing systems (Vol. 8, pp. 316–322). Cambridge, MA: The MIT Press.
  4. Auer, P., Burgsteiner, H., & Maass, W. (2008). A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Neural Networks, 21, 786–795.
  5. Bolle, D., & Shim, G. M. (1995). Nonlinear Hebbian training of the perceptron. Network, 6, 619–633.
  6. Bouboulis, P., & Theodoridis, S. (2011). Extension of Wirtinger's calculus to reproducing kernel Hilbert spaces and the complex kernel LMS. IEEE Transactions on Signal Processing, 59(3), 964–978.
  7. Castillo, E., Fontenla-Romero, O., Alonso-Betanzos, A., & Guijarro-Berdinas, B. (2002). A global optimum approach for one-layer neural networks. Neural Computation, 14(6), 1429–1449.
  8. Cavallanti, G., Cesa-Bianchi, N., & Gentile, C. (2007). Tracking the best hyperplane with a simple budget perceptron. Machine Learning, 69, 143–167.
  9. Cesa-Bianchi, N., Conconi, A., & Gentile, C. (2005). A second-order perceptron algorithm. SIAM Journal on Computing, 34(3), 640–668.
  10. Chen, J. L., & Chang, J. Y. (2000). Fuzzy perceptron neural networks for classifiers with numerical data and linguistic rules as inputs. IEEE Transactions on Fuzzy Systems, 8(6), 730–745.
  11. Crammer, K., Dekel, O., Shalev-Shwartz, S., & Singer, Y. (2005). Online passive aggressive algorithms. Journal of Machine Learning Research, 7, 551–585.
  12. Diene, O., & Bhaya, A. (2009). Perceptron training algorithms designed using discrete-time control Liapunov functions. Neurocomputing, 72, 3131–3137.
  13. Duch, W. (2005). Uncertainty of data, fuzzy membership functions, and multilayer perceptrons. IEEE Transactions on Neural Networks, 16(1), 10–23.
  14. Duda, R. O., & Hart, P. E. (1973). Pattern classification and scene analysis. New York: Wiley.
  15. Eitzinger, C., & Plach, H. (2003). A new approach to perceptron training. IEEE Transactions on Neural Networks, 14(1), 216–221.
  16. Fernandez-Delgado, M., Ribeiro, J., Cernadas, E., & Ameneiro, S. B. (2011). Direct parallel perceptrons (DPPs): Fast analytical calculation of the parallel perceptrons weights with margin control for classification tasks. IEEE Transactions on Neural Networks, 22(11), 1837–1848.
  17. Fontenla-Romero, O., Guijarro-Berdinas, B., Perez-Sanchez, B., & Alonso-Betanzos, A. (2010). A new convex objective function for the supervised learning of single-layer neural networks. Pattern Recognition, 43(5), 1984–1992.
  18. Frean, M. (1992). A thermal perceptron learning rule. Neural Computation, 4(6), 946–957.
  19. Freund, Y., & Schapire, R. (1999). Large margin classification using the perceptron algorithm. Machine Learning, 37, 277–296.
  20. Gallant, S. I. (1990). Perceptron-based learning algorithms. IEEE Transactions on Neural Networks, 1(2), 179–191.
  21. Gentile, C. (2001). A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2, 213–242.
  22. Gori, M., & Maggini, M. (1996). Optimal convergence of on-line backpropagation. IEEE Transactions on Neural Networks, 7(1), 251–254.
  23. Hassoun, M. H., & Song, J. (1992). Adaptive Ho-Kashyap rules for perceptron training. IEEE Transactions on Neural Networks, 3(1), 51–61.
  24. Ho, Y. C., & Kashyap, R. L. (1965). An algorithm for linear inequalities and its applications. IEEE Transactions on Electronic Computers, 14, 683–688.
  25. Ho, C. Y.-F., Ling, B. W.-K., Lam, H.-K., & Nasir, M. H. U. (2008). Global convergence and limit cycle behavior of weights of perceptron. IEEE Transactions on Neural Networks, 19(6), 938–947.
  26. Ho, C. Y.-F., Ling, B. W.-K., & Iu, H. H.-C. (2010). Invariant set of weight of perceptron trained by perceptron training algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 40(6), 1521–1530.
  27. Khardon, R., & Wachman, G. (2007). Noise tolerant variants of the perceptron algorithm. Journal of Machine Learning Research, 8, 227–248.
  28. Kivinen, J., Smola, A. J., & Williamson, R. C. (2004). Online learning with kernels. IEEE Transactions on Signal Processing, 52(8), 2165–2176.
  29. Krauth, W., & Mezard, M. (1987). Learning algorithms with optimal stability in neural networks. Journal of Physics A, 20(11), 745–752.
  30. Legenstein, R., & Maass, W. (2008). On the classification capability of sign-constrained perceptrons. Neural Computation, 20, 288–309.
  31. Li, Y., & Long, P. (2002). The relaxed online maximum margin algorithm. Machine Learning, 46, 361–387.
  32. Mansfield, A. J. (1991). Training perceptrons by linear programming, NPL Report DITC 181/91. Teddington, UK: National Physical Laboratory.
  33. Maass, W., Natschlaeger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11), 2531–2560.
  34. Mays, C. H. (1963). Adaptive threshold logic. Ph.D. thesis, Stanford University.
  35. Muselli, M. (1997). On convergence properties of pocket algorithm. IEEE Transactions on Neural Networks, 8(3), 623–629.
  36. Nagaraja, G., & Bose, R. P. J. C. (2006). Adaptive conjugate gradient algorithm for perceptron training. Neurocomputing, 69, 368–386.
  37. Perantonis, S. J., & Virvilis, V. (2000). Efficient perceptron learning using constrained steepest descent. Neural Networks, 13(3), 351–364.
  38. Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65, 386–408.
  39. Rosenblatt, F. (1962). Principles of neurodynamics. New York: Spartan Books.
  40. Rowcliffe, P., Feng, J., & Buxton, H. (2006). Spiking perceptrons. IEEE Transactions on Neural Networks, 17(3), 803–807.
  41. Shalev-Shwartz, S., & Singer, Y. (2005). A new perspective on an old perceptron algorithm. In Proceedings of the 16th Annual Conference on Computational Learning Theory (pp. 264–278).
  42. Sima, J. (2002). Training a single sigmoidal neuron is hard. Neural Computation, 14, 2709–2728.
  43. Vallet, F. (1989). The Hebb rule for learning linearly separable Boolean functions: Learning and generalisation. Europhysics Letters, 8(8), 747–751.
  44. Werbos, P. J. (1990). Backpropagation through time: What it does and how to do it. Proceedings of the IEEE, 78(10), 1550–1560.
  45. Widrow, B., & Hoff, M. E. (1960). Adaptive switching circuits. In IRE WESCON Convention Record (Vol. 4, pp. 96–104).
  46. Widrow, B., & Lehr, M. A. (1990). 30 years of adaptive neural networks: Perceptron, Madaline, and backpropagation. Proceedings of the IEEE, 78(9), 1415–1442.

Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  1. Enjoyor Labs, Enjoyor Inc., Hangzhou, China
  2. Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
