Background knowledge and declarative bias in inductive concept learning

  • Nada Lavrač
  • Sašo Džeroski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 642)

Abstract

Classical inductive learning algorithms suffer from two main limitations: a limited ability to take available background knowledge into account, and the use of restricted knowledge representation formalisms based on propositional logic. The paper presents a method for using background knowledge effectively when learning both attribute and relational descriptions. The method, implemented in the system LINUS, employs propositional learners within a more expressive logic programming framework, which allows logic programs to be learned in the form of constrained deductive hierarchical database clauses. The paper discusses the language bias imposed by the method and shows how a more expressive language of determinate logic programs can be used within the same framework.
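
To make the approach concrete, here is a minimal sketch, in Python, of the kind of transformation a LINUS-style system performs: background predicates are applied to the arguments of the training examples to produce an attribute-value table, a propositional learner induces rules from that table, and the rules are translated back into clauses. The target relation daughter(X, Y), the predicates female/1 and parent/2, and the facts below are hypothetical illustrations, and the single-pass "learner" is a toy stand-in for the propositional learners (such as ASSISTANT, CN2 or NEWGEM) that the framework would actually call.

    # A minimal sketch of the attribute-value transformation used by a
    # LINUS-style system. The target relation daughter(X, Y), the background
    # predicates female/1 and parent/2, and the facts are hypothetical; the
    # toy learner below stands in for ASSISTANT, CN2 or NEWGEM.

    # Background knowledge: ground facts.
    female = {"mary", "ann", "eve"}
    parent = {("ann", "mary"), ("ann", "tom"), ("tom", "eve"), ("tom", "ian")}

    # Positive and negative examples of the target relation daughter(X, Y).
    positive = [("mary", "ann"), ("eve", "tom")]
    negative = [("tom", "ann"), ("eve", "ann")]

    def features(x, y):
        """Truth values of background literals applied to the example's arguments."""
        return {
            "female(X)": x in female,
            "female(Y)": y in female,
            "parent(X,Y)": (x, y) in parent,
            "parent(Y,X)": (y, x) in parent,
        }

    # Step 1: transform the relational problem into a propositional table.
    table = [(features(x, y), label)
             for examples, label in ((positive, True), (negative, False))
             for x, y in examples]

    # Step 2: a toy rule "learner" -- keep the literals that are true in every
    # positive example, then check that the conjunction excludes all negatives.
    body = [lit for lit in features("mary", "ann")
            if all(row[lit] for row, lab in table if lab)]
    assert not any(all(row[lit] for lit in body) for row, lab in table if not lab)

    # Step 3: translate the propositional rule back into a clause.
    print("daughter(X,Y) :- " + ", ".join(body))
    # prints: daughter(X,Y) :- female(X), parent(Y,X)

The resulting clause is constrained in the sense used in the abstract: every variable in the body also appears in the head, so the hypothesis stays within the language of constrained deductive hierarchical database clauses.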

Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Nada Lavrač (1)
  • Sašo Džeroski (1)
  1. Jožef Stefan Institute, Ljubljana, Slovenia
