BT* – An Advanced Algorithm for Anytime Classification

  • Philipp Kranen
  • Marwan Hassani
  • Thomas Seidl
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7338)


In many scientific disciplines, experimental data is generated at high rates, resulting in a continuous stream of data. Databases of previous measurements can be used to train classifiers that categorize newly incoming data. However, the large size of the training set can lead to high classification times, e.g., for approaches that rely on nearest neighbors or kernel density estimation. Anytime algorithms circumvent this problem, since they can be interrupted at will while their performance increases with additional computation time. Two important quality criteria for anytime classifiers are high accuracy for arbitrary time allowances and a monotonic increase of accuracy over time. The Bayes tree has been proposed as a naive Bayesian approach to anytime classification based on kernel density estimation. However, its decision process often results in an oscillating accuracy over time. In this paper, we propose the BT* method and show in extensive experiments that it outperforms previous methods in both monotonicity and anytime accuracy, yielding near-perfect results on a wide range of domains.
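The anytime principle described in the abstract can be illustrated with a minimal sketch: an interruptible 1-NN classifier that scans training points until its time budget expires and returns the best answer found so far. This is a hypothetical illustration of interruptibility in general, not the paper's BT* method; all function and variable names are assumptions.

```python
def anytime_nn_classify(query, training_data, budget):
    """Illustrative anytime 1-NN classifier.

    Scans training points in order and can be 'interrupted' after
    `budget` points, returning the best label found so far.  A larger
    budget means more points are examined, so the answer typically
    improves with additional computation time.
    """
    best_dist, best_label = float("inf"), None
    for i, (point, label) in enumerate(training_data):
        if i >= budget:  # interruption: time allowance exhausted
            break
        # squared Euclidean distance to the query
        dist = sum((a - b) ** 2 for a, b in zip(query, point))
        if dist < best_dist:
            best_dist, best_label = dist, label
    return best_label
```

With a budget of one point the classifier may return a poor guess; as the budget grows, the returned label converges to the true nearest-neighbor decision, mirroring the accuracy-over-time behavior that anytime classifiers are evaluated on.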


Keywords: Bayesian Network · Gaussian Mixture Model · Kernel Density Estimation · Concept Drift · Anytime Algorithm





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Philipp Kranen (1)
  • Marwan Hassani (1)
  • Thomas Seidl (1)

  1. RWTH Aachen University, Germany
