On-the-Go Adaptability in the New Ant Colony Decision Forest Approach

  • Urszula Boryczka
  • Jan Kozak
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8398)


In this article we present a new and effective adaptive ant colony algorithm that we employ to construct a decision forest (aACDF). The aim of this approach is to create an adaptive meta-ensemble based on data sets created during the algorithm's runtime. This on-the-go approach allows us to construct classifiers in which wrongly classified objects obtain a greater probability of being chosen for the pseudo-samples in subsequent iterations. Every pseudo-sample is created on the basis of the training data. Our results support the view that this new adaptive ACDF slightly reduces classification accuracy while building considerably smaller decision trees.
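The resampling idea described above is boosting-like: after each iteration, the weights of misclassified training objects are increased, so they are more likely to be drawn into the next pseudo-sample. A minimal sketch of that mechanism follows; the function names and the fixed weight-update factor are illustrative assumptions, not the paper's exact update rule.

```python
import random

def update_weights(weights, misclassified, factor=2.0):
    """Increase the weights of wrongly classified objects, then renormalize.

    `factor` is an assumed constant; the actual update in aACDF may differ.
    """
    new = [w * factor if m else w for w, m in zip(weights, misclassified)]
    total = sum(new)
    return [w / total for w in new]

def draw_pseudo_sample(data, weights, size=None):
    """Draw a pseudo-sample (with replacement) from the training data,
    where each object's selection probability is its current weight."""
    size = size if size is not None else len(data)
    return random.choices(data, weights=weights, k=size)

# Toy run: four objects start with equal weights; object 2 was
# misclassified, so its weight (and selection probability) rises.
data = ["a", "b", "c", "d"]
weights = [0.25] * 4
weights = update_weights(weights, [False, False, True, False])
pseudo_sample = draw_pseudo_sample(data, weights)
```

After the update, object `"c"` carries weight 0.4 against 0.2 for the others, so subsequent pseudo-samples concentrate on the objects the current ensemble gets wrong.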


Keywords: Ant Colony Decision Forest, Boosting, Ant Colony Optimization, Decision Forest, ACDT





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Urszula Boryczka¹
  • Jan Kozak¹
  1. Institute of Computer Science, University of Silesia, Sosnowiec, Poland
