Automatic Detection of Go-Based Patterns in CA Model of Vegetable Populations: Experiments on Geta Pattern Recognition

  • Stefania Bandini
  • Sara Manzoni
  • Stefano Redaelli
  • Leonardo Vanneschi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4173)


This paper presents an empirical study that evaluates and compares several Machine Learning (ML) classification techniques on the automatic recognition of known patterns. The main motivation of this work is to select the best-performing classification techniques for a task in which target classes are defined by the occurrence of known patterns in configurations of a forest system modeled as a Cellular Automaton (CA). The best-performing ML classifiers will then be adopted for the study of ecosystem dynamics within an interdisciplinary research collaboration among computer scientists, biologists, and ecosystem managers (Cellular Automata For Forest Ecosystems, the CAFFE project). One of the main aims of the CAFFE project is the development of an analysis method based on the recognition, in CA state configurations, of spatial patterns whose interpretation is inspired by the Chinese game of Go.
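To illustrate the kind of task the abstract describes, the following is a minimal, hypothetical sketch: a classifier (here a perceptron, one of the techniques the paper compares) is trained to recognize a Go-inspired spatial motif in small binary grid configurations. The 3x3 grid size, the `is_motif` definition, and the cell encoding are illustrative assumptions, not the CAFFE project's actual CA model or its definition of the geta pattern.

```python
# Hypothetical sketch: train a perceptron to recognize a Go-inspired motif
# in toy 3x3 binary grids. All modeling choices here are assumptions made
# for illustration, not the paper's actual setup.

import itertools

GRID_CELLS = 9  # 3x3 binary grids, row-major order


def is_motif(grid):
    # Toy stand-in for a Go-like shape: the four edge-adjacent neighbours
    # of the centre cell are occupied and the centre itself is empty
    # (the four corner cells are unconstrained).
    return grid[1] == grid[3] == grid[5] == grid[7] == 1 and grid[4] == 0


# Enumerate every binary 3x3 configuration and label it by motif occurrence.
data = [(g, 1 if is_motif(g) else 0)
        for g in itertools.product((0, 1), repeat=GRID_CELLS)]


def train_perceptron(samples, lr=1.0, max_epochs=1000):
    # Classic perceptron rule: update the weights only on misclassified
    # samples; stop as soon as a full pass makes no mistakes.
    w, b = [0.0] * GRID_CELLS, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                mistakes += 1
                step = lr * (y - pred)
                w = [wi + step * xi for wi, xi in zip(w, x)]
                b += step
        if mistakes == 0:  # converged: this toy motif is linearly separable
            break
    return w, b


w, b = train_perceptron(data)
accuracy = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
               for x, y in data) / len(data)
print(f"training-set accuracy: {accuracy:.3f}")
```

Because this toy motif is linearly separable, the perceptron is guaranteed to converge to a perfect separator; the paper's actual comparison also covers techniques such as Naive Bayes, SVMs trained with Sequential Minimal Optimization, and decision trees, which handle target classes that a single perceptron cannot.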


Keywords: Support Vector Machine, Cellular Automaton, Cellular Automaton Model, Sequential Minimal Optimization





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Stefania Bandini (1)
  • Sara Manzoni (1)
  • Stefano Redaelli (1)
  • Leonardo Vanneschi (1)
  1. Dept. of Informatics, Systems, and Communication, University of Milano-Bicocca
