Learning relational concepts at different levels of granularity

  • G. Armano
  • G. Fumera
Machine Learning 2
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1321)


This paper presents an alternative approach to the induction of relational concepts. The underlying framework relies on the notion of exception: a counterexample left within the scope of a description intended to classify examples of the given target concept. While characterizing the target concept, an initial description is searched for first. This description must be complete, although not necessarily consistent, meaning that some counterexamples may be misclassified. Since these counterexamples (i.e., exceptions) must be taken into account in order to classify them properly, the learning process is performed in several steps, each devoted to coping with the exceptions generated in the previous one. Eventually, the process terminates, usually yielding a description that uses a form of Vere's counterfactuals to refine the underlying concept at different levels of granularity.
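The stepwise scheme described above can be sketched as a recursive loop: learn a complete (but possibly inconsistent) description, collect the negatives it still covers as exceptions, and recursively learn a description of those exceptions, composing the results in a counterfactual "covers D1 and not D2" form. The sketch below is illustrative only; `find_complete_description` is a hypothetical stand-in for whatever base learner is used, shown here as a toy interval learner over a single numeric attribute.

```python
def find_complete_description(pos, neg):
    # Toy base learner: cover all positives with one numeric interval.
    # It is complete by construction, but not necessarily consistent:
    # it may also cover some negatives (the exceptions).
    lo, hi = min(pos), max(pos)
    return lambda x, lo=lo, hi=hi: lo <= x <= hi

def learn_with_exceptions(pos, neg, depth=0, max_depth=5):
    """Iteratively refine a description in counterfactual form:
    covered by the current description AND NOT covered by the
    (recursively learned) description of its exceptions."""
    desc = find_complete_description(pos, neg)
    exceptions = [x for x in neg if desc(x)]  # misclassified negatives
    if not exceptions or depth >= max_depth:
        return desc
    # Exceptions become the positives of the next, finer-grained step;
    # the current positives now act as its counterexamples.
    inner = learn_with_exceptions(exceptions, pos, depth + 1, max_depth)
    return lambda x: desc(x) and not inner(x)
```

For example, with positives {1, 2, 3, 8, 9, 10} and negatives {5, 6}, the first step covers the whole interval [1, 10], leaving 5 and 6 as exceptions; the second step carves them out, yielding a two-level description.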




  1. [Cam93]
    Cameron-Jones, R.M., and Quinlan, J.R., "Efficient top-down induction of logic programs," SIGART Bulletin, Vol. 5, pp. 33–42, 1993.
  2. [Esp93]
    Esposito, F., Malerba, D., Semeraro, G., and Pazzani, M.J., "A Machine Learning Approach to Document Understanding," Proc. 2nd Int. Workshop on Machine Learning, Harpers Ferry, WV, pp. 276–292, May 1993.
  3. [Gem91]
    Gemello, R., Mana, F., and Saitta, L., "Rigel: An Inductive Learning System," Machine Learning, Vol. 6, pp. 7–35, 1991.
  4. [Mic80]
    Michalski, R.S., "Pattern Recognition as Rule-Guided Inductive Inference," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 2, pp. 349–361, 1980.
  5. [Mic83]
    Michalski, R.S., "A Theory and Methodology of Inductive Learning," Artificial Intelligence, Vol. 20, pp. 111–161, 1983.
  6. [Mug92]
    Muggleton, S., and Feng, C., "Efficient Induction of Logic Programs," in S. Muggleton (Ed.), Inductive Logic Programming, pp. 281–298, London, Academic Press, 1992.
  7. [Paz91]
    Pazzani, M.J., Brunk, C.A., and Silverstein, G., "A Knowledge Intensive Approach to Learning Relational Concepts," Proc. 8th International Workshop on Machine Learning, Evanston, Illinois, pp. 432–436, 1991.
  8. [Qui89]
    Quinlan, J.R., "Learning relations: Comparison of a symbolic and a connectionist approach," Technical Report 346, Basser Dept. of Computer Science, University of Sydney, Sydney, Australia, 1989.
  9. [Qui95]
    Quinlan, J.R., and Cameron-Jones, R.M., "Induction of Logic Programs: FOIL and Related Systems," New Generation Computing, Vol. 13, pp. 287–312, 1995.
  10. [Tur95]
    Turney, P., "Low Size-Complexity Inductive Logic Programming: The East-West Challenge Considered as a Problem in Cost-Sensitive Classification," Proc. of the Fifth International Inductive Logic Programming Workshop, ILP-95, pp. 247–263, 1995.
  11. [Ver80]
    Vere, S.A., "Multilevel Counterfactuals for Generalizations of Relational Concepts and Productions," Artificial Intelligence, Vol. 14, pp. 139–164, 1980.
  12. [Win86]
    Winston, P.H., "Learning by Augmenting Rules and Accumulating Censors," in Michalski, Carbonell, Mitchell (Eds.), Machine Learning: An AI Approach, Vol. 2, 1986.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • G. Armano¹
  • G. Fumera¹

  1. DIEE, Dept. of Electrical and Electronic Engineering, Cagliari
