
Extension of Binary Neural Networks for Multi-class Output and Finite Automata

  • Chapter
Neural Information Processing: Research and Development

Part of the book series: Studies in Fuzziness and Soft Computing ((STUDFUZZ,volume 152))

Abstract

Neural networks implementing Boolean functions are known as Binary Neural Networks (BNNs). Various methods for constructing BNNs have been introduced over the last decade, and many applications require BNNs that can handle multiple classes. In this paper, we first review some basic methods proposed in the last decade for the construction of BNNs. We summarize the main approach shared by these methods, observing that a neuron can be visualized in terms of its equivalent hypersphere. Next, we give several approaches for adapting a BNN construction process to classification problems that require classifying data into multiple (more than two) classes, and we illustrate these approaches with examples. From a theoretical viewpoint, the limited applicability of BNNs does not prevent a Finite Automaton (FA) from being expressed in terms of recurrent BNNs: we prove that recurrent BNNs can simulate any deterministic as well as non-deterministic finite automaton. The proof is constructive, and the construction process is illustrated with suitable examples.
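The abstract's two central ideas — a binary threshold neuron realizing a Boolean function, and a recurrent network of such neurons simulating a finite automaton — can be sketched in a few lines. The sketch below is an illustrative toy (an even-parity DFA with a one-hot state encoding), not the construction proved in the chapter; the function names and the particular weight choices are assumptions made for the example.

```python
def threshold_neuron(weights, bias, inputs):
    """Binary threshold unit: outputs 1 iff w.x + b >= 0 (a Boolean function)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s >= 0 else 0

# A single neuron computes AND with weights [1, 1] and bias -2.
assert threshold_neuron([1, 1], -2, [1, 1]) == 1
assert threshold_neuron([1, 1], -2, [1, 0]) == 0

# Toy DFA over {0, 1} accepting strings with an even number of 1s.
# The state (even, odd) is one-hot; one AND neuron gates each relevant
# transition, and an OR neuron re-forms the next one-hot state — the same
# flavor of recurrent threshold circuitry the chapter's construction uses.
def dfa_step(state, symbol):
    even, odd = state
    a = threshold_neuron([1, 1], -2, [even, 1 - symbol])  # stay even on 0
    b = threshold_neuron([1, 1], -2, [odd, symbol])       # return to even on 1
    next_even = threshold_neuron([1, 1], -1, [a, b])      # OR of the two paths
    return (next_even, 1 - next_even)

def accepts(bits):
    state = (1, 0)               # start in the "even" state
    for s in bits:
        state = dfa_step(state, s)
    return state[0] == 1         # "even" is the accepting state

assert accepts([1, 1, 0]) is True
assert accepts([1, 0, 0]) is False
```

A nondeterministic automaton can be handled in the same spirit by letting the state vector carry several 1s at once (the subset of active states) and taking the OR over all incoming transitions, which is why the simulation result covers both DFAs and NFAs.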



Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Chaudhari, N.S., Tiwari, A. (2004). Extension of Binary Neural Networks for Multi-class Output and Finite Automata. In: Rajapakse, J.C., Wang, L. (eds) Neural Information Processing: Research and Development. Studies in Fuzziness and Soft Computing, vol 152. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39935-3_12


  • DOI: https://doi.org/10.1007/978-3-540-39935-3_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-53564-2

  • Online ISBN: 978-3-540-39935-3

  • eBook Packages: Springer Book Archive
