Refining first order theories with neural networks

  • M. Botta
  • A. Giordana
  • R. Piola
Communications Session 1A: Logic for AI
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1325)

Abstract

This paper presents the experimental evaluation of a neural network architecture that can manage structured data and refine knowledge bases expressed in a first order logic language.

This framework is well suited to classification problems in which concept descriptions depend on numerical features of the data and the data have variable size. The main goal of the neural architecture is to refine the numerical part of the knowledge base without changing its structure.

Several experiments are described in the paper in order to evaluate the potential benefits with respect to the more classical architectures based on the propositional framework. In the first case, a classification theory was handcrafted and then refined automatically; in the second, it was acquired automatically by a symbolic relational learning system able to deal with numerical features. An extensive experimental comparison with the most popular propositional learners has also been carried out, showing that the new network architecture converges quickly and generalizes better than all of them.
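The core idea the abstract describes, keeping the logical structure of a rule fixed while tuning its numerical parameters by connectionist learning, can be illustrated with a minimal sketch. This is not the authors' actual FONN implementation; the rule, feature values, and hyperparameters below are hypothetical, and a single threshold stands in for the numerical part of a first-order clause:

```python
import math

# Hedged sketch: a rule with a numeric test, e.g. "positive(x) :- feature(x) > theta",
# is mapped to a sigmoid unit whose threshold theta is the only trainable parameter.
# Gradient descent refines the numeric part; the rule's structure never changes.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def refine_threshold(examples, theta=0.0, beta=4.0, lr=0.5, epochs=200):
    """Tune the rule threshold on (feature_value, label) pairs.

    beta controls how sharply the sigmoid approximates the crisp
    test `feature > theta` (larger beta = closer to a step function).
    """
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(beta * (x - theta))           # soft truth value of the rule
            grad = (p - y) * p * (1.0 - p) * (-beta)  # d(0.5*(p-y)^2)/d(theta)
            theta -= lr * grad
    return theta

# Toy data whose true decision boundary lies between 2.3 and 2.7.
data = [(1.0, 0), (2.0, 0), (2.3, 0), (2.7, 1), (3.0, 1), (4.0, 1)]
theta = refine_threshold(data, theta=0.0)
```

After training, `theta` settles near the boundary between the negative and positive examples, so the original crisp rule classifies the data correctly with its refined threshold.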

Keywords

Learning and Knowledge Discovery · Soft Computing · First Order Logic · Connectionist Learning · Theory Refinement


Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • M. Botta
  • A. Giordana
  • R. Piola
  1. Dipartimento di Informatica, Università di Torino, Torino, Italy