
Verifying Properties of Neural Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2084)

Abstract

In the early nineties, Hava Siegelmann proposed a new computational model, the Artificial Recurrent Neural Network (ARNN), and proved that it can perform hypercomputation. She also established the equivalence between the ARNN and other analog systems that support hypercomputation, laying the foundations of an alternative computational theory. In this paper we contribute to this alternative theory by exploring the use of formal methods in the verification of temporal properties of ARNNs. Building on Bradfield's work on the verification of temporal properties of infinite systems, we simplify his tableau system while keeping its expressive power, and show that it is suitable for verifying temporal properties of ARNNs.
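
For readers unfamiliar with the model named in the abstract, the sketch below shows one synchronous update step of an ARNN in Siegelmann's sense: each neuron applies the saturated-linear activation to an affine combination of the current neuron states and the external inputs. The specific weights, sizes, and variable names here are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of one ARNN update step (Siegelmann-style dynamics):
#   x(t+1) = sigma(A x(t) + B u(t) + c)
# with sigma the saturated-linear function. All concrete numbers below
# are hypothetical, chosen only to make the example runnable.

import numpy as np

def sigma(z):
    """Saturated-linear activation: 0 below 0, identity on [0, 1], 1 above 1."""
    return np.clip(z, 0.0, 1.0)

def arnn_step(x, u, A, B, c):
    """One synchronous update of all neuron states."""
    return sigma(A @ x + B @ u + c)

# Tiny illustrative network: 2 neurons, 1 input line.
A = np.array([[0.5, 0.25],
              [0.0, 1.0]])
B = np.array([[1.0],
              [0.5]])
c = np.array([0.0, -0.25])

x = np.zeros(2)          # state at time t
u = np.array([1.0])      # input at time t
print(arnn_step(x, u, A, B, c))  # state at time t+1
```

With rational weights such a network is Turing-equivalent, while real weights yield the hypercomputational power discussed in the abstract; the verification question addressed in the paper is whether temporal properties of the resulting (generally infinite) state space can be established with a tableau system in the style of Bradfield.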



References

  1. Bradfield, J.: Verifying Temporal Properties of Systems with Applications to Petri Nets. PhD thesis, University of Edinburgh (1991)

  2. Bradfield, J.: Proving Temporal Properties of Petri Nets. In: Rozenberg, G. (ed.): Advances in Petri Nets 1991. Lecture Notes in Computer Science, Vol. 524. Springer-Verlag, Berlin (1991) 29–47

  3. Bradfield, J., Stirling, C.: Local Model Checking for Infinite State Spaces. Theoretical Computer Science, Vol. 96 (1992) 157–174

  4. Emerson, E. A., Lei, C.-L.: Efficient Model Checking in Fragments of the Propositional Mu-Calculus. Proceedings of the 1st IEEE Symposium on Logic in Computer Science (1986) 267–278

  5. Giles, C. L., Miller, C., Chen, D., Chen, H., Sun, G., Lee, Y.: Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks. Neural Computation, Vol. 4(3) (1992) 393–405

  6. Gruau, F., Ratajszczak, J., Wiber, G.: A Neural Compiler. Theoretical Computer Science, Vol. 141(1–2) (1995) 1–52

  7. Neto, J. P., Siegelmann, H. T., Costa, J. F.: On the Implementation of Programming Languages with Neural Nets. In: First International Conference on Computing Anticipatory Systems (CASYS'97). CHAOS, 1 (1998) 201–208

  8. Neto, J. P., Siegelmann, H. T., Costa, J. F.: Building Neural Net Software. Submitted (1999)

  9. Pollack, J.: On Connectionist Models of Natural Language Processing. PhD thesis, University of Illinois, Urbana (1987)

  10. Siegelmann, H. T.: On NIL: the Software Constructor of Neural Networks. Parallel Processing Letters, Vol. 6(4). World Scientific Publishing Company (1996) 575–582

  11. Siegelmann, H. T.: Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhäuser, Boston (1999)



Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rodrigues, P., Costa, J.F., Siegelmann, H.T. (2001). Verifying Properties of Neural Networks. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_19


  • DOI: https://doi.org/10.1007/3-540-45720-8_19


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42235-8

  • Online ISBN: 978-3-540-45720-6

