SAMIA: A bottom-up learning method using a simulated annealing algorithm

  • Pierre Brézellec
  • Henri Soldano
Research Papers: Genetic Algorithms
Part of the Lecture Notes in Computer Science book series (LNCS, volume 667)


This paper presents a description and an experimental evaluation of SAMIA, a learning system that induces characteristic concept descriptions from positive instances, negative instances and a background knowledge theory. The resulting concept description is expressed as a disjunction of conjunctive terms in a propositional language. SAMIA works in three steps. The first step uses the theory exhaustively to extend the representation of the instances. The learning component then combines a bottom-up induction process with a simulated annealing strategy that searches the space of concept descriptions. In the final step, the theory is used again to reduce each conjunctive term of the resulting formula to a minimal representation. The paper reports the results of several experiments and compares the performance of SAMIA with two other learning methods, namely ID and CN, on both test-instance accuracy and concept description size. The experiments indicate that SAMIA's classification accuracy is roughly equivalent to that of the two other systems. Moreover, since the output of each learning algorithm can be expressed as a set of rules, one can observe that SAMIA's concept descriptions contain fewer rules than those of both ID and CN.
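The search described above can be sketched in code. The following is a minimal illustration of the general idea, not SAMIA's actual implementation: the instance encoding (sets of propositional attributes), the cost function (`energy`, with a hypothetical description-size penalty), the neighbourhood moves (`neighbor`), and the cooling schedule are all assumptions made for this sketch. A hypothesis is a disjunction of conjunctive terms, initialised bottom-up from the positive instances themselves, and simulated annealing perturbs it while occasionally accepting worse candidates.

```python
import math
import random

def covers(hypothesis, instance):
    """An instance satisfies a DNF hypothesis if at least one conjunctive
    term (a set of required attributes) is contained in the instance."""
    return any(term <= instance for term in hypothesis)

def energy(hypothesis, positives, negatives, size_weight=0.1):
    """Cost to minimise: training errors plus a small description-size penalty
    (the weight 0.1 is an arbitrary choice for this sketch)."""
    errors = sum(1 for p in positives if not covers(hypothesis, p))
    errors += sum(1 for n in negatives if covers(hypothesis, n))
    return errors + size_weight * sum(len(term) for term in hypothesis)

def neighbor(hypothesis, attributes, rng):
    """Propose a nearby hypothesis: drop a literal from one term (generalise)
    or add one (specialise); terms emptied by the move are discarded."""
    if not hypothesis:
        return [frozenset({rng.choice(sorted(attributes))})]
    terms = [set(t) for t in hypothesis]
    term = rng.choice(terms)
    if term and rng.random() < 0.5:
        term.discard(rng.choice(sorted(term)))
    else:
        term.add(rng.choice(sorted(attributes)))
    return [frozenset(t) for t in terms if t]

def anneal(positives, negatives, attributes, steps=3000, t0=2.0, alpha=0.995, seed=1):
    """Bottom-up start: each positive instance is its own maximally specific
    term; annealing then searches the space of DNF concept descriptions."""
    rng = random.Random(seed)
    current = [frozenset(p) for p in positives]
    cur_e = energy(current, positives, negatives)
    best, best_e, temp = current, cur_e, t0
    for _ in range(steps):
        cand = neighbor(current, attributes, rng)
        cand_e = energy(cand, positives, negatives)
        # accept improvements always, degradations with Boltzmann probability
        if cand_e <= cur_e or rng.random() < math.exp((cur_e - cand_e) / temp):
            current, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = current, cur_e
        temp *= alpha  # geometric cooling
    return best, best_e

if __name__ == "__main__":
    # toy data: the target concept is "contains attribute a"
    positives = [frozenset({"a", "b"}), frozenset({"a", "c"})]
    negatives = [frozenset({"b", "c"}), frozenset({"c", "d"})]
    best, e = anneal(positives, negatives, {"a", "b", "c", "d"})
    print(best, e)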


Keywords: Bottom-Up Algorithms · Characteristic Descriptions · Concept Learning · Simulated Annealing


  1. [Bisson 92]
    Gilles Bisson, “Conceptual Clustering in a First Order Logic Representation”, Tenth European Conference on Artificial Intelligence, Vienna 92, Bernd Neumann (ed.), pp 458–468.
  2. [Boswell 90a]
    J. Boswell, “Manual for NewID version 2.1”, The Turing Institute, January 1990.
  3. [Boswell 90b]
    J. Boswell, “Manual for CN2”, The Turing Institute, January 1990.
  4. [Brunak, Engelbrecht & Knudsen 91]
    Søren Brunak, Jacob Engelbrecht, Steen Knudsen, “Prediction of Human mRNA Donor and Acceptor Sites from the DNA Sequence”, Journal of Molecular Biology (no. 220, 1991), pp 49–65.
  5. [Brézellec & Champesme 92]
    Pierre Brézellec, Marc Champesme, “Vers un système d'apprentissage moins sensible au bruit, et aux descriptions et théories initiales”, Huitième Congrès Reconnaissance des Formes et Intelligence Artificielle, Lyon-Villeurbanne 91, pp 945–952.
  6. [Collins, Eglese & Golden 88]
    N.E. Collins, R.W. Eglese, B.L. Golden, “Simulated annealing — An annotated bibliography”, American Journal of Mathematical and Management Sciences 8, pp 209–307.
  7. [Clark & Niblett 89]
    Peter Clark, Tim Niblett, “The CN2 Induction Algorithm”, Machine Learning (volume 3, number 4, March 89), Kluwer Academic Publishers, pp 261–283.
  8. [Drastal, Czako & Raatz 89]
    G. Drastal, R. Czako, S. Raatz, “Induction in an abstraction space: A form of constructive induction”, Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit 89, Morgan Kaufmann, pp 708–712.
  9. [Ganter, Rindefrey & Skorsky 86]
    B. Ganter, K. Rindefrey, M. Skorsky, “Software for a formal concept analysis”, Classification as a Tool of Research, North Holland 1986.
  10. [Liquière & Méphu Nguifo 90]
    Michel Liquière, Engelbert Méphu Nguifo, “LEarning with GAlois Lattice: Un système d'apprentissage de concepts à partir d'exemples”, Cinquièmes Journées Françaises d'Apprentissage, Lannion 90, pp 93–113.
  11. [Kodratoff & Ganascia 86]
    Yves Kodratoff, Jean-Gabriel Ganascia, “Improving the generalization step in Learning”, Machine Learning: An Artificial Intelligence Approach (volume II), Morgan Kaufmann (1986), pp 215–244.
  12. [Kudo & Shimbo 89]
    Mineichi Kudo, Masaru Shimbo, “Optimal subclasses with dichotomous variables for feature selection and discrimination”, IEEE Trans. Systems, Man, Cybern., 19, pp 1194–1199.
  13. [Michalski 84]
    R.S. Michalski, “A Theory and Methodology of Inductive Learning”, Machine Learning: An Artificial Intelligence Approach (volume I), Springer Verlag (1984), pp 83–129.
  14. [Michalski et al. 86]
    Ryszard S. Michalski, Igor Mozetic, Jiarong Hong, Nada Lavrac, “The multi-purpose incremental learning system AQ15 and its testing application to three medical domains”, Proceedings of the Fifth National Conference on Artificial Intelligence, Morgan Kaufmann, pp 1041–1045.
  15. [Quinlan 86]
    J.R. Quinlan, “Induction of Decision Trees”, Machine Learning (volume 1, number 1, 1986), Kluwer Academic Publishers, pp 81–106.
  16. [Quinlan 88]
    J.R. Quinlan, “An Empirical Comparison of Genetic and Decision-Tree Classifiers”, Proceedings of the Fifth International Conference on Machine Learning, Ann Arbor 88, pp 135–141.
  17. [Rendell & Cho 90]
    Larry Rendell, Howard Cho, “Empirical Learning as a Function of Concept Character”, Machine Learning (volume 5, number 3, August 90), Kluwer Academic Publishers, pp 267–298.
  18. [Roussel-Ragot 90]
    P. Roussel-Ragot, “La méthode du recuit simulé: Accélération et parallélisation”, doctoral thesis, Université Paris 6.
  19. [Rouveirol 90]
    Celine Rouveirol, “Saturation: Postponing Choices when Inverting Resolution”, Seventh International Conference on Machine Learning, Austin 90, pp 557–562.
  20. [Towell, Shavlik & Noordewier 90]
    Geoffrey G. Towell, Jude W. Shavlik, Michiel O. Noordewier, “Refinement of Approximate Domain Theories by Knowledge-Based Neural Networks”, AAAI 90, pp 861–866.
  21. [Van de Velde 89]
    Walter Van de Velde, “IDL, or Taming the Multiplexer”, Proceedings of the Fourth European Working Session on Learning, Montpellier 89, Morgan Kaufmann, pp 211–225.
  22. [Wilson 87]
    S. W. Wilson, “Classifier Systems and the Animat Problem”, Machine Learning (volume 2, number 4, 1987), Kluwer Academic Publishers, pp 199–226.
  23. [Zhang & Michalski 89]
    Jianping Zhang, Ryszard S. Michalski, “Rule Optimization Via SG-Trunc Method”, Proceedings of the Fourth European Working Session on Learning, Montpellier 89, Morgan Kaufmann, pp 251–262.

Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Pierre Brézellec (1, 2)
  • Henri Soldano (1, 2)
  1. LIPN (CNRS/URA 1507), Université Paris Nord, Villetaneuse
  2. Section Physique-Chimie, A.B.I 11, Institut Curie, Paris
