Refining numerical terms in horn clauses

  • Marco Botta
  • Attilio Giordana
  • Roberto Piola
Machine Learning
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1321)

Abstract

This paper presents an experimental analysis of a recently proposed method for refining knowledge bases expressed in a first order logic language. The method consists of transforming a classification theory into a neural network, called First Order logic Neural Network (FONN), by replacing predicate semantic functions and logical connectives with continuous-valued differentiable functions. In this way it is possible to tune the numerical constants in the original theory by gradient descent on the classification error. The classification theory to be refined can be handcrafted or acquired automatically by a symbolic relational learning system able to deal with numerical features. The emphasis of this paper is on the experimental analysis of the method: an extensive experimentation is reported, considering different choices for encoding the logical connectives and different variants of the learning strategy. The experimentation is carried out on a challenging artificial case study and shows that FONNs converge quite quickly and generalize better than propositional learners do on an equivalent task definition.
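The numerical-term refinement idea can be illustrated with a small, self-contained sketch. This is not the authors' FONN implementation: the clause structure, the threshold names (theta), the sigmoid slope beta, and the use of a product t-norm for the AND connective are all illustrative assumptions, chosen only to show how a crisp numerical condition in a clause body can be softened into a differentiable function and its constants tuned by gradient descent.

```python
import numpy as np

# Illustrative sketch (not the paper's FONN code): a clause such as
#   class(X) :- length(X) > theta1, width(X) < theta2
# is softened by replacing the crisp comparisons with sigmoids and the
# AND connective with a product t-norm, so theta1 and theta2 become
# trainable numerical terms while the clause structure stays fixed.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def clause_output(x, theta, beta=4.0):
    # soft "length(X) > theta1" and soft "width(X) < theta2"
    p1 = sigmoid(beta * (x[:, 0] - theta[0]))
    p2 = sigmoid(beta * (theta[1] - x[:, 1]))
    return p1 * p2  # product t-norm as the soft AND

def refine(x, y, theta, lr=0.05, epochs=2000, beta=4.0):
    # plain gradient descent on the squared classification error,
    # tuning only the numerical constants of the clause
    for _ in range(epochs):
        p1 = sigmoid(beta * (x[:, 0] - theta[0]))
        p2 = sigmoid(beta * (theta[1] - x[:, 1]))
        err = p1 * p2 - y
        # d(out)/d(theta1) = -beta * p1 * (1 - p1) * p2, and symmetrically
        g1 = np.mean(2.0 * err * (-beta) * p1 * (1.0 - p1) * p2)
        g2 = np.mean(2.0 * err * beta * p2 * (1.0 - p2) * p1)
        theta -= lr * np.array([g1, g2])
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, size=(200, 2))
    # ground truth uses thresholds 5.0 and 3.0; start from a rough guess
    y = ((x[:, 0] > 5.0) & (x[:, 1] < 3.0)).astype(float)
    theta = refine(x, y, np.array([4.0, 4.5]))
    print("refined thresholds:", theta)
```

In the paper's setting the same mechanism is applied to a full first order theory, so the softened clauses form a network whose tunable weights are the numerical terms of the original Horn clauses.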

Keywords

Classification Theory, Radial Basis Function Network, Fuzzy Logic Controller, First Order Logic, Horn Clause

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Marco Botta (1)
  • Attilio Giordana (1)
  • Roberto Piola (1)

  1. Dipartimento di Informatica, Università di Torino, Torino (TO), Italy
